I recall seeing a collection of links from Scott about alternative healthcare options or navigating the healthcare/health insurance system if you're not in an ideal position. Does anyone happen to know where that was or have a link?
(trailing_zeroes(x) & 1) is 0 if there are an even number of trailing zeros, 1 otherwise. ((x >> trailing_zeroes(x))&7)^1) strips the trailing zeroes, keeps the second and third digits, and sets the remaining to 0. Hence this tests if x = (y*8+1)*4^n for some nonnegative integers y, n.
I’m assuming trailing_zeros is the number of least significant zeros on a binary representation before you get to a one? Which is then I for the largest 2^i dividing the number. x >> trailing zeros is then x with all its factors of two divided out.
turns into (1 if i is even otherwise 0) | (first three bits of x without factors of two, with first bit set to zero as it must be 1 unless x was zero). Assuming x isn’t 0, the second part tests if the first two bits after the first 1 are both 0s. This is true if x, minus all it’s factors of 2, is equal to 1 mod 8. Bit wise or is logical or as it’s operating on either 0 or 1. So, uh, tests, if x = (2^p * s) and s has no factor of two, (p%2==1) || (s%8 == 1)? Don’t thing I quite figured it out. Does this have some number theory significance? Did I get it right?
Does anyone knows if 'Boss As A Service' is still a going thing, accepting new clients? I did sign up but haven't heard anything. Also, any recommendations for personal accountability? Beeminder isn't doing it for me, either because I need an actual person or because I need to set individual goals for each week (not a generic weekly target).
Anyone know if any good meetups for learning solidity ? I’m already on blockchain NYC, and it seems good, but they don’t meet as often as I’d like. I’d also like to hear about any good discords for solidity novices.
I think the following post is _not_ about politics. If Scott disagrees, I apologize in advance, feel free to delete this without feeling bad about it
The other day I wrote up some thoughts about the latest spike in the covid pandemic, some musing on the reliability (or lack thereof) of data, and the value of using common-sense heuristics with appropriate uncertainty. The folks on Facebook mostly ignored it, but maybe you all would appreciate it more. It is reproduced below
----
Disclaimer: the following post is not meant to imply anything beyond exactly what I am saying. Please don't assume I have other subtextual or connotative conclusions.
Austin is in the middle of a covid spike and there are a lot of different (and conflicting) messages coming from various authorities and data sources. How do we know what sources to take seriously? What should we believe?
Personally, to square this circle, I've taken a handful of heuristics that have served me well. One of them is the presumption that covid is seasonal, like basically every other respiratory illness. Based purely on this heuristic, on July 30th, I made the following prediction:
The current spike in Austin will peak between August 12th and August 18th, and fall afterwards almost as quickly as it rose.
Why did I make this prediction? Last year in July, the spike (as measured by the 7-day rolling average of daily hospitalizations) peaked 13 days after entering stage 5. Last year in December, the spike peaked 19 days after entering stage 5. We entered stage 5 on July 30th, 2021.
Sure enough, it is now August 15th. And what actually happened? The (7-day rolling average of) daily new hospitalizations were rising very quickly at the start of August. Around August 6th, the rate-of-increase began to shrink. The hospitalization rate peaked on August 11th at 83.6 (Higher than the July 2020 peak at 75.1, lower than the Dec 2020 peak at 93.7). It has been falling ever since, at 78.7 today. If it continues to fall at the same rate it is falling now, we will leave stage 5 somewhere around August 22nd.
(Note: August 11th was only a few days ago, and there is the possibility that this is premature. However, since I'm looking at a 7-day rolling average, any trend needs to exist for a week-ish before it shows up in the graph at all. This gives me confidence that this isn't a blip, but a real trend).
(Added for ACT comment: I wrote this two days ago, when the most recent data point available was Aug 13. The next four days of data continued the downwards trend, albeit with a blip today. My prediction conditional on "fall at the same rate" is likely not correct, however we appear to still be on a rapid downwards trajectory)
I would like to propose some simple questions to all of you. I am not trying to argue or convince anyone of anything; Facebook is not the correct place for that. I'm simply curious what thoughts other people have put into this, and would like to politely suggest consideration of some things people may not have considered.
2) How bad did you think it was going to be this time around? Better than this? Worse than this?
3) What were the trusted authorities (whichever authorities you choose to trust) saying about how bad it was going to be? Why were they saying what they were saying? Were they accurate?
4) Was my prediction from three weeks ago accurate?
Consider the following: There are a lot of people out there, with fancy credentials, decades of experience, deep domain knowledge, and formally recognized positions of authority on this pandemic. They have made various claims about what will happen, and justified those claims with elaborate appeals to their expertise. Meanwhile, I'm not a doctor, I have no statistical models, and no deep reasoning. I just have a simple heuristic: "it'll probably be the same as it was last time". Assuming that you agree with (1) above, my predictions in the past few weeks have been more accurate than every single public health authority I've paid attention to.
During the pandemic there has been a lot of talk of "trust the science" or "trust the experts". Science is very important and frequently correct, and trusting it is generally a good idea. But more important than trusting it is understanding it, and I've been somewhat disappointed in what I've seen in that regard. Part of understanding science is understanding its limitations, and its biggest limitation is time, effort, and data. Science takes time and requires a large amount of data and analysis before we can make strong and confident predictions. Despite today being March 533rd, 2020 (https://calendar2020.noj.cc/), 500 days isn't that much time by scientific standards (something something Zooey Deschanel). And despite our increasingly quantified society, much of our data is still very noisy and less objective than we'd like to believe. While we can, should, and are doing all the hard work of scientific analysis to better answer these various questions, we should also be modest and be careful not to overstate our confidence in unreliable data.
It's easy to be bamboozled by charts and graphs and numbers. But at the end of the day, things still have to make sense. Things are roughly consistent over time and space. Magic doesn't happen. Everything has to be coherent with itself and each other, and unprincipled exceptions to basic principles rarely if ever happen. All the data in the world won't help you if that data is noisy, unreliable, or incomplete. Meanwhile, basic common-sense rules of thumb, based on the above simple principles, taken with the appropriate amount of uncertainty, is frequently a very good way of making accurate and well-calibrated predictions.
I'll leave you all with a parting question. Our leaders invoke complicated and elaborate statistical models, detailed research papers, and decades of subject matter experience and they make one prediction. I ignore all of that, and make a different prediction based on nothing other than the basic scientific principle that patterns are real. Their predictions took millions of dollars and thousands of man-hours of time to make. My prediction took about 15 seconds of eyeballing a graph (copied below, so you can check my work). My predictions are more accurate than our experts'. So the question is: What do we do with this information? If basic rough guesses are more accurate than all that scientific work, how should we respond to this in a way that gives us the best predictions and information going forward?
https://files.catbox.moe/xa0jt6.jpg - annotated graph of 7-day rolling average of daily new hospitalizations, taken straight from the City of Austin's Staging Dashboard
I think the problem is that if we just look at 15 seconds of eyeballing a graph, we end up with COVID is just another swine flu, SARS, Ebola, West Nile, etc and won't really affect the US. Was it a black swan or were we looking at the wrong graph?
Also is the scientific prediction predicting the graph or causing it? Is the Covid season "the July and November/December holidays" or is it "when things are so calm everyone relaxes" followed by the thing that ends Covid season: "people panicking"?
My 10 year old daughter has gained weight during the pandemic, mostly as a result of her exercising and playing less. She went from slim to chucky, with an obviously fat stomach and face. Now I'm concerned that her body is defending this new setpoint.
Does anyone have any advice for helping her lose weight? Yes, I am trying to get her involved in sports, but it's been slow so far. We cook almost all meals at home and have a fairly healthy diet.
Look up rec. calorie intake for her height and age, try to give her that in healthy food (whole grains, ferments, colorful vegetables, etc. Also remember that some more caloric foods like meat and cheese are more satiating than an equal amount of vegetables or carbs. Kids seem to benefit from having proteins and fat in their diets.)+/- a bit, and try to get her exercise every day.
She doesn't need to be doing squats or something, just take her on a walk around the park and encourage active play.
Basically, as long as you don't let her eat processed snacks and deserts and/or sweetened drinks (Fruit juice counts! IT's about as bad as soda!) all the time, you'll probably be fine.
Fruit juice isn’t necessarily as bad as soda. It depends a lot - you can make “fruit juice” at home by just grinding up whatever fruit you have and putting it in a cup (maybe more of a smoothie), and there’s a range between that and “sugary water with extracted fruit flavoring added”. I’d still recommend fruits over juice though, and most juices tend towards the second.
Umm I'm not sure why people still don't know this, but weight loss is 90% diet. So keep involving her into sports (as it's very healthy), but for weight loss just keep cooking her healthy food, make sure she doesn't eat too much BS outside from that, and she'll lose weight
Her diet did not change, but her outdoor activities did. I mean, is possible she eats more now than then, but the types of foods have not changed. And we eat a quite healthful diet.
Don't you have control over how much food your daughter eats?
(I'm not suggesting outright denying meals, but more measured portion control. And I'd suggest doing the maths, as others have suggested, to ensure she's actually obese before considering these kinds of measures - the classical "overweight" range is not actually harmful.)
I don't have specific advice but I would disagree with the people saying not to worry about it. Being fat in school is horrible and being fat in general is horrible ( I am fat ). I started putting on weight when I was 11 after I stopped playing soccer and it has been a constant blight on my life, my parents made no effort to help me lose weight or stay in shape after I started gaining and it is basically my only complaint about how I was raised. Good luck.
It depends if she's putting on fat because of over-eating and lack of exercise, or if she's putting on fat because her body is preparing for puberty. If she is not grossly overweight by the BMI for her age and height, then I wouldn't worry too much.
Stressing out over weight at this age will only sow the seeds for eating disorders later in life. Cut out junk, get more exercise, watch and see how her body responds. Making a big deal out of it or neglecting it are both bad approaches. Making a ten year old self-conscious about "I'm too fat" leads to a host of bad results, even if the intentions of the parents are good.
It's true. I managed to lose a lot of weight and since regained it, when I was skinny people treated me differently. It's only attractive people who say looks don't matter.
Back in the day that was called "puppy fat". If she's ten, then her body could be gearing up for puberty (average age for girls is eleven). If you're concerned, then the advice seems to be to check BMI and compare her weight to the average for her sex and age:
This seems right on, I (male) did this too (around 11-12) and ended up a very skinny teenager/young adult. I only assume the pounds I put on the last couple of years (mid-thirties) are only just preparing me for another growth spurt.
I'd hazard a guess that her body's getting ready for a growth spurt. Wouldn't worry too much about being svelte at 10 when she'll be morphing into a teenager over the next few years.
I saw someone on Twitter who disputed a medical bill that he received by arguing that they had given him an extremely complex code when a simple one was necessary. It occurred to me that would be an amazing public service if there were a publicly searchable database of all billing codes. I believe that new legislation now allows all patients to read their charts, which would enable them to access these codes. If not, who could create it? Would it violate some sort of intellectual property to create a site like this?
Spoke to doctor's office, asked them what vasectomy would cost. They told me cash cost. I asked how much it would be on insurance, they said they had no idea it is based on my plan, etc. I said no, how much would you charge my insurance based on negotiated rate, I could figure out my share from there (which would be all of it, HDHP). Doctor's office said they had no way of knowing until I go in for an initial consultation (which I would be charged for) but I could call my insurance.
Called my insurance and they explained they were legally prohibited from disclosing the negotiated rate with the provider. I could call the provider back and get the exact ICD code they would use and then call the insurance back and they might be able to tell me what is the generic (not specific to that provider) negotiated rate for that ICD code.
This is a common, completely elective procedure that should be the same as LASIK or plastic surgery, but whatever. I ended up going out of network with someone who told me a reasonable cash cost.
Anyway, just something to say that even if ICD codes were easier to navigate there are a number of roadblocks in the way of consumer empowerment.
"Would it violate some sort of intellectual property to create a site like this?" - as far as I know no, but HIPAA specter hovers over it.
Are you liable if someones enters own codes without understanding how much it reveals?
Are you liable when someones entered code of their mother / friend / employee?
Are you legally liable when someone entered lies and someone else get cofused? (you are definitely sueable as usual and it may be not straightforward to dismiss)
> If not, who could create it?
Anyone? For start it could be a Google doc or standard wiki install. There are many places where it could be hosted for free, for example wiki on an empty Github repository.
To be more specific: individual facts are not covered by copyright and in USA uncreative collections of facts are also not ( https://en.wikipedia.org/wiki/Database_right are not a thing in USA )
And still, entering it code by code likely would not trigger database rights at all.
Related to this question, are there any services out there that automatically submit similar claims on behalf of patients? I know there are services that contest tickets and have a pretty high sucess rate. Something comparable for medical bills would be significantly more useful, assuming that contestable overbilling happens moderately often.
I have read a few defenses of logical positivism lately, not because I was seeking them out but rather I happened to come across them. In particular, Liam Kofi Bright has appeared on a few podcasts arguing for it, and Scott himself has written something kind of like a defense of it. Wikipedia, however, describes it as "dead, or as dead as a philosophical movement ever becomes", and as far as I understand there were some slam-dunk criticisms of it that really did show it being untenable (although I personally have not looked into them very much yet). Is there now a resurgence in logical positivism, or did I just happen to stumble across the only two people who will defend it?
It's typically more Popper, Quine, and Kuhn that were considered to have put the nail in logical positivism. They may or may not have been "correct" full stop in their own ideas, but by at least trying to formalize science by reference to the actual practice of scientists, rather than trying to derive it from first principles, they got a lot closer than logical positivism.
If that didn't do the trick, then information theory and statistical estimation and learning should have put the truly final nail in the coffin. It's pretty clear at this point that statements don't need to be empirically verifiable or logical provable to convey information.
"If that didn't do the trick, then information theory and statistical estimation and learning should have put the truly final nail in the coffin. It's pretty clear at this point that statements don't need to be empirically verifiable or logical provable to convey information."
I wonder. I do statistics for living, and all examples where statistical inference says anything about world needs data that originate from empirical (including observational) sources -- and speaking of observations, one can get only a limited view into causality with observations alone without conducting experiments. I surmise I could subscribe to some sort of statistically informed logical (or rather, probabilistic?) positivism, if someone would formulate such a theory.
I have not read Quine nor Popper's philosophy of science. I have not exactly read Kuhn either, but the meaty points of Structure of Scientific Revolutions are described in many textbooks and reviews.
What I have understood of Kuhn and other late 20th century philosophers of science seem to pose descriptive theories that seem more sociological, and thus say nothing useful to a scientist (or anyone else) who'd like to know more about how to get closer to truth.
In sociological sense, Feyarabend might very well be right in the sense that one can find some corner of "science" where anything goes if it is the fashionable way of doing things, but it certainly does not sound very good way of getting better at science.
Logical positivism had one inspiring thing in it, the confidence that there is a scientific method to be found. Looking at everyone involved from a century afterwards and without getting too deep into their differences, logical positivists start to look not so different from Popper's falsification.
I don't think the takeaway should be that logical positivism had nothing at all going for it. The problem is that a core claim was statements that can't be empirically verified or logically proven don't have meaning or information content. Probabilistic statements, even if the probabilities are derived from empirical observation, inherently can't be empirically verified. They can only be estimated. You can't actually run all possible worlds in parallel to measure the density of observed positive trials. That is what I meant by them putting a final nail in the coffin.
I definitely think you should maintain confidence that there is a scientific method to be found. It just needs to be found by observing what scientists actually do and what seems to work better than not. Ironically enough, logical positivism failed because it wasn't itself based on empirical evidence. I think what we see with science in practice is falsification more often than verification, and probabilistic statements more often than absolute statements.
> Probabilistic statements [..] inherently can't be empirically verified.
Sounds like a poor definition for "empirically verified". On the contrary, I have not seen empirical verification which isn't statistical statement. That is why I suggested a *statistically informed* reformulation of positivism.
>It just needs to be found by observing what scientists actually do and what seems to work better than not.
Sounds great, except, what if I happen to be scientist and I want to get better at it? This is exactly why I think the sociologically-oriented philosophy does not seem to have anything to offer.
I think Godel put the nail in this theory since there will be statements that can’t be logically proven but which still have meaning. I could be mistaken, but I had thought his incompleteness proof was an example of this.
AFAIU Godel only proved the risk for incompleteness if you care for self-referential propositions. If you can live without those, you can have completeness. I don't see why we should care for those too much.
A motivation to care for them can be provided by induction. If you don’t care for about mathematical induction, you could not care about them. But Godel was specially talking about arithmetic so he the we’re important over that domain. If you never want a formal proof that uses induction then again, you wouldn’t have to worry. But I don’t know how far hon would get practically with that scheme.
Godel put the nail in the coffin of mathematical logicism, but it looks like even after Godel there were logical positivists. This seems to be some sort of contradiction for the reasons you mention, so I'm not sure how this happened. I'm looking for Liam Kofi's response to Godel, but most of his work talking about positivism is in podcasts instead of easily searchable text.
I'm in Zurich for the next two weeks. Is there any meetup here during that time period? I'd also love to grab coffee or something with anyone here. Let me know if you're interested!
I have been trying to follow the war in Ethiopia through reading any new articles I come across, but I feel like they never give a very complete picture. A few things seem very odd to me. The dispute escalated very quickly from timing of elections to a pretty brutal civil war, for instance.
Is there a good long form article that gives a good overview of the Ethiopian war so far? Or maybe a report from a think tank/gov agency or something?
"DSM Review: The Meanings of Madness: The Diagnostic and Statistical Manual of Mental Disorders, or DSM, features nearly 300 diagnoses. What’s the science behind it?" By Stephen Eide | Aug. 15, 2021
"If there really is a mental-health crisis, then doctors—psychiatrists—should have a lead role in responding to it. Rutgers sociologist Allan V. Horwitz’s history of the “Diagnostic and Statistical Manual of Mental Disorders,” or DSM—the medical field’s definitive classification of mental disorders—explores psychiatry’s claim to such authority. “DSM: A History of Psychiatry’s Bible” puts forward two arguments: first, that the DSM is a “social creation”; second, that we’re stuck with it.
My impression is that there's a very well-established critique of the DSM, which is that the people writing it and using it do not think very clearly about the differences between spectra ("tall" vs "short", where the label just describes which end of a distribution you're on) and taxa ("has the flu" vs "does not have the flu", where the label corresponds to a real categorical distinction in nature), and often make bad inferences about how mental illness works as a result of this confusion.
The linked article seems like it might just be making this point again, but without reading past the paywall I can't say whether it might also have some more novel stuff to say.
It is a book review. It is not presented for the truth of the matter. I presented it as notice of a new book that might interest Our Fearless Leader and some scientists or physicians in the audience.
It is a subject that I have no understanding of. Although my wife was a student of Alan Frances who was the reporter for DSM4 and rebuked the authors of DSM5. I once meet him socially. Lovely fellow. My wife has never expressed any views on the subject to me.
As for the paywall. That is WSJ.com's doing. Many people do subscribe to the site as it is one of the most influential. Many people may have access through an institutional library.
I ask as I noticed that I started eating less and less meat recently AND I am prone to repeating eating the same food for days AND I hate tofu/soy products/chia/quinoa. So I am worried that I should worry about incomplete protein.
But is (for example) eating a lot of rice (and just rice for 5 days) and then eating a lot of beans for next three days balancing each other or not at such timescale?
Probably relatively short. The body does not store excess protein in the diet, the way it stores excess carbs or fats. Excess amino acids are just combusted and the nitrogen immediately excreted as urea, which is why the body cannot later construct amino acids from stored fat, the way it can construct new carbs or fats.
In my opinion “just rice for five days” is a terrible idea if you mean literally only rice. But if it’s like rice for five days with a dozen other things, then beans for three with a dozen other things, maybe that’s fine. You’d probably feel a bit off of it was causing problems. But I have no clue.
People have been fine after much worse diets than just rice for five days. There's a guy who gave up food for Lent and lived on beer alone! https://www.youtube.com/watch?v=h9EEghd_TFg
On page 4 one can see a figure showing the results of essential amino acid removal from diet. It seems effects set in quickly within four days. But that’s with total removal, something unlikely in any real diet. So I doubt this has real relevance to your question, but it’s cool!
> On the 4th day of isoleucine deprivation, the output of nitro- gen exceeded the intake by 3.79 and 3.90 gm. respectively. Both young men complained bitterly of a complete loss of appetite. Each experienced great difficulty in consuming his meals. Furthermore, the symptoms of nervousness, exhaustion, and dizziness, which are encountered to a greater or lesser extent whenever human subjects are deprived of an essential amino acid, were exaggerated to a degree not observed before or since in other types of dietary deficiencies, It became evident by the end of the 4th day that the subjects were approaching the limit of their endurance, and that the missing amino acid must be returned to the food without further delay.
I second this question. As a sometimes vegan, I've gotten kind of religious about making sure I get complete proteins with each meal, minimum on a daily basis. Would love to find out that I can be more lax about it.
also, @traveling_through_time, have you tried marinating tofu and searing/baking it? Something like this: https://www.noracooks.com/marinated-tofu/ Unmarinated tofu is the literal worst, good marinated tofu rivals the occasional steak I have.
Also, seitan can give pretty good mouth feels, and has a pretty mild flavor so you can make it taste like most anything. It's not a complete protein, so sometimes I do like ground seitan in black beans to get a sort of ground beef filling for tacos.
I tried tofu some times, and even when everyone praised it as very tasty I needed to force myself to eat it.
My conclusion so far was that tofu is simply one of few foods that I really dislike. And I am not too picky "eating a lot of rice (and just rice for 5 days)" is something that actually happened.
Fair. I've been trying to make myself like tomatoes for the last 15 years, no dice yet.
The marinade is important largely to let the salt permeate the tofu, otherwise it tastes really dull. If you bake it (my recommendation), the "trick" is to bake it in 1/4 inch slices until its browned-almost-burnt, but not burnt, gotta flip once or twice. It fixes the texture problem giving it a better mouth feel, and I imagine does some chemistry shit that makes the falvors come out.
The real problem IMO with _not_ using tofu or seitan (or nuts, I guess) is that I don't see how to get a decent macro balance - I always end up super carb heavy. The macros of e.g. straight chickpeas are just about perfect, but you combine them with something to make it a complete protein and your carbs go off the chart. Beans and rice, same deal.
Substack is reverting comments to "New First" every time I load a new article. It didn't do this for a while (not sure whether it reverted to chronological or was sticky). Is this an intentional change?
I think "New First" is the default for Open Threads and "Chronological" for other posts. (I'm also think I remember a community discussion where this emerged as the consensus, so pretty sure its intentional.)
Any industrial engineers in here, particularly with experience in safety engineering? I'm wondering if the discipline covers such things as individuals taking initiative to prevent disaster. It seems there are disasters (Challenger, Chernobyl) that could have been prevented by a well placed individual taking actions that would have seemed excessive to individuals who thought there was no danger.
I do mission assurance for space flight, mostly but not exclusively unmanned. This strongly overlaps safety engineering, in that a billion-dollar rocket and payload exploding is a Very Bad Thing whether there are people on board or not. And in that there are many people involved in the process of making sure things go as planned.
And if it's important, it's too important to be left to individual initiative. Individual performance varies too wildly to be counted on; you need a process where if some people screw up and everybody else just does their job by the numbers, the people just doing their job by the numbers will catch the screwups before anybody gets killed and/or any rocket explodes. Sometimes that doesn't happen, in which case it would be nice if someone exercising extraordinary initiative were to somehow fix the problem. And sometimes that part *does* happen, which is great when it does. But you can't count on it, and it can't be part of your plan if it matters.
Also, too *much* initiative sometimes means that where you once had a process that would have worked, now you have a dozen undocumented variations made by people who thought they found a way to make it "better" but who collectively opened a gap for failure.
I'm also a pilot, and in that realm we all understand that sometimes it will come down to one or two people having to fix the problem unaided and on short notice. If that happens, initiative matters and pilots are expected to exercise it. And are trained and tested to ensure they have the skill, knowledge, and judgement to exercise their initiative properly. But even then, we start with checklists and procedures designed to reduce as much as possible the slight chance that a pilot might have to exercise initiative.
Initiative is better suited to accomplishing great things, than to avoiding great catastrophes.
I also hear, however, that practically all organizations that undergo this process of becoming High-Reliability Organizations (you appear to be describing the processes of one), only undergo this transformation in the aftermath of catastrophe.
Is there any advice on how to transform an organization into an HRO, when the organization does not think its actions can lead to disaster?
An easier way to do it would be to selectively hire people from other HROs and build in a very conscientious way. Essentially piggy-backing on previous catastrophe without experiencing it yourself as an organization.
I think you can create such an organization in other ways, which would be much harder and less complete. What that requires is a founder that is very much invested in making an HRO, and emphasizing that throughout the organization's history. They would hire people with specific functions and goals, write policies that further those specific goals, and incentivize those goals. You couldn't reward success if it meant incentivizing non-HRO behavior. The major problem with that is new companies tend to succeed by being nimble and making fast changes. HROs succeed by making slow decisions that are carefully thought out. In order for an HRO to survive being created, it needs very strong financial backing and patient lenders/owners. Amazon (not necessarily an HRO though), to a large extent, seems to have had that. Jeff Bezos was willing to forego personal wealth and cashing out over a pretty long period of time, and kept the money going back into the company. He did that for eventual super money and that worked out well for him. A company trying to do that for reliability may have a harder time getting there. That said, SpaceX and the other private commercial space companies seem to be acting as HROs without personally experiencing major catastrophe. Of course, they have had a series of smaller catastrophes and a lot of their workforce came from existing HROs, so that's ground I may have already covered and not a new thing.
>That said, SpaceX and the other private commercial space companies seem to be acting as HROs without personally experiencing major catastrophe.
SpaceX hired a lot of people who would have been right at home in the "move fast and break things" culture of Silicon Valley, but as its president and COO it went with Gwynne Shotwell, a veteran of one of the oldest and most highly regarded HROs in the business. Disclaimer: it's the same HRO I now work for.
The other private space companies haven't done enough space flight for a lack of catastrophes to be really telling. And some of them have had more than their share of catastrophes before ever reaching space.
Specifically, I'm trying to figure out what it would take to have AI research conducted in a safe way. It sounds like pretty much all AI research should occur in an HRO context, but the puzzle is to figure out how to do that.
I'm also reading Engineering a Safer World on the recommendation of another commenter, which says the HRO paradigm is obsolete, but that hardly matters to my purposes: the idea here is to figure out how to conduct AI research safely, not to wed to a specific paradigm. I suspect the ideas in Engineering a Safer World can be applied to the AI community, but we shall see.
Your idea of getting people with HRO experience into this is probably worthwhile.
What the AI researchers are doing right now is not good enough. Something else needs to be attempted, and we have managed to do novel stuff safely before. It's one thing if there had been something equivalent to the Asilomar conference on recombinant DNA, but the one that happened on AI really was not the same at all.
Lay off all but your very best people, sell everything including the buildings at auction, and start over someplace new?
Unfortunately, I don't know of any recipe for turning a non-HRO into an HRO without a catastrophe in the middle. Nor can I think of any examples offhand, except by building a small HRO in an isolated corner of a larger organization, but in that case I don't know of any (non-catastrophic) way to pull the HRO-ness into the main organization. Preemptive institutional reform is a hard problem.
I'm fortunate enough to work for The Aerospace Corporation, which was created for the specific purpose of being an HRO and which was done right. That's not too difficult to do if you care, and there are plenty of examples to work from.
I would say that if individuals taking unusual initiative risky to the careers is necessary to prevent disaster, then it is a clear evidence that management, processes and safety engineering catastrophically failed.
(in software engineering it is often stated that system where someone can make a mistake and cause outage/failure/catastrophe is a system that is inherently unsafe, as mistakes are normal)
Well, if we're doing safety-in-depth (we should), individuals should be prepared to make such decisions. There is no particular reason that management, processes and safety engineering, should be the only layers of defense in the system, though of course, ideally they should also be there.
But where people sacrificed (or would need to sacrifice) their careers to stop catastrophe is system that would block/eliminate useful well placed individuals that you mentioned.
I don't see how. Even if the RBMK reactor had been much better designed, and had a containment structure, some bizarre accident could still have happened, leading to the same situation of an irresponsible leader (Dyatlov) giving insane orders. Though of course, the scenario becomes much less likely, especially if all the people involved are very safety minded.
For many systems, it does not seem possible to remove humans out of the loop entirely.
Sure, but in a safer (Western style) reactor, the worst case scenario from following insane orders is a ordinary meltdown a la TMI that will not spew radiation into the environment. In a molten salt reactor, it's probably impossible to cause damage by doing something in the control room (it seems to me that the worst thing you possibly do in a molten salt reactor is add too much fuel. But even that shouldn't be dangerous if the reactor shuts down automatically on overheat)
It's good that it's possible to design safe reactors. But the broader conversation here is about the role of individual initiative in preventing disaster, which in my view, includes such things as ensuring we're not working toward Chernobyl-style blunders.
How do we know we're not currently building our very own RBMK-reactor-style deathtrap, but in another domain?
I was mostly thinking about social design (Glorious Leadership of Party is never wrong, wrongthink can send you to a gulag, Lysenkoism-type mess in science, no decent alternatives at all to party controlled jobs, official approval of various kinds necessary to get anything from meat to radio to apartment).
I am not expecting idiot-proof reactor that can be operated by children, but lying about defects, concealing failures, leadership ignoring facts, superpowerful leadership, no recourse where said leadership wants you gone and so on are a serious problem.
And not-so-bad reactor design would help significantly. There were many footguns, one of the more well known was that emergency scramming of reactor initially produced power spike - the exact opposite of desired or expected effect [1].
And the response after is hard to even parody (though this Netflix pseudocumentary managed to do this anyway).
> Consequently, injecting a control rod downward into the reactor in a scram initially displaced (neutron-absorbing) water in the lower portion of the reactor with (neutron-moderating) graphite. Thus, an emergency scram initially increased the reaction rate in the lower part of the core.[7]:4 This behaviour had been discovered when the initial insertion of control rods in another RBMK reactor at Ignalina Nuclear Power Plant in 1983 induced a power spike. Procedural countermeasures were not implemented in response to Ignalina. The UKAEA investigative report INSAG-7 later stated, "Apparently, there was a widespread view that the conditions under which the positive scram effect would be important would never occur. However, they did appear in almost every detail in the course of the actions leading to the (Chernobyl) accident."[7]:13
Netflix pseudo-documentary? You mean HBO's Chernobyl? Was it really a cartoonish depiction, and if so, how? I know about the woman scientist being an invented character, there not being a big dark plume, or trees surrounding browning practically immediately, but not anything that would make me conclude that the whole thing is essentially a lie.
I would recommend looking into the study of high reliability organizations (https://en.wikipedia.org/wiki/High_reliability_organization), which looks at the organizational factors that allow groups that would otherwise be considered high risk to operate without major failures or disasters.
Interesting stuff. While the AI research community lacks some of the features of HROs (compressed time factors, high frequency immediate feedback) they need to apply a lot more stuff from them (high accountability, operating in unforgiving social and political environments, an organization-wide sense of vulnerability, widely distributed sense of responsibility and accountability for reliability, concern about misperception, misconception and misunderstanding that is generalized across a wide set of tasks, operations, and assumptions, pessimism about possible failures, redundancy and a variety of checks and counter checks as a precaution against potential mistakes are more distinctive).
Please, I'm in tech. We're not engineers for the most part. Even at the highest levels, where people work on stuff that's supposed to be highly reliable, they're not working with the mindset of an engineer designing a bridge, much less an engineer working on a life-critical system.
Check out Nancy Leveson, especially her book "Engineering a Safer World". She writes a lot about common misconceptions about operators and safety engineering. One of her claim is that accident investigations often end up unfairly blaming the operators, who are almost always doing what they think is right based on their model of the system. The book describes strategies for designing systems to help operators make good internal models.
On your point, I think she would disagree. As I said, operators (and people in general) does what they think are safe. Accidents often lack a single cause, and systems migrate towards unsafety unless effort and design is put in to fight back (compare with entropy). Chernobyl had plenty of problems and if someone had heroically prevented the accident, all the problems would have remained for the future. What is needed is strong leadership and strong safety culture trough through the organisation (which is hard but not impossible).
It's true that in Chernobyl, if the technicians had forcibly thrown Dyatlov out of the room, it might have prevented the disaster that night, but then, they would have had no way of ensuring the same mess did not happen in the future.
But what about the Challenger? Roger Boisjoly, unlike the technicians at Chernobyl, knew exactly what was wrong with the Challenger, but the management at his consultancy and at NASA was reckless. Was it really impossible for him to save the Challenger?
Challanger had the same problem. Management wanted to fly, information that prevented flying never travelled far up the hierarchy since no-one wanted to bring up bad news. Disaster could theoretically be pushed a few years into the future (as with Chernobyl), but the fundamental problem wouldn'tbe fixed.
NASA management was the most reckless. Morton Thiokol's management did see the O-ring problem that Biosjoly and other engineers detected, and the night before the launch, actually recommended no-go on it, but NASA management got pissed and pressured them to say go (apparently a highly unusual action), and Thiokol management relented.
So we have NASA management being reckless, and Morton Thiokol management and engineering being spineless as the final causes of the Challenger disaster.
Sure, ideally systems are such that a situation where someone needs to grow a spine does not arise, but it does seem to me that someone growing a spine would have prevented the Challenger disaster.
In the USSR yes, but I don't think the US is so dysfunctional that it can only learn certain lessons in the aftermath of disaster. Forceful action could have escalated the problem as far up the ladder as needed.
Anybody got a covid vaccine effectiveness update? I'm trying to convince some hesitant friends of mine that are trying to get a dr's note, even though they are in risky careers (Nurse and Fireman).
Regarding the meetups, are you going to contact people and give them a chance to sort their plans out before posting all of them? I just put in a placeholder place and time, since I don't expect to have to be the one to host anyway, but just in case.
No, I wasn't planning on contacting all 200 (!) of the people who volunteered. I hope most people entered true information, if not I'll try to solve it after the fact.
Was the placeholder information obviously (as in, ridiculously, impossibly impractical) a placeholder?
If you said “the middle of the city at 12am”, I would not be shocked to see someone looking confused at the default pin location for your city on google maps at midnight, given a large enough population.
I don't remember what I put. I just wanted to say I would be willing to host if absolutely nobody else would be willing to. Is there a way to view or edit my response?
I'm going to see a psychiatrist for the first time, and since appointments are very expensive and hard to come by where I live, I want to try to get the most out of it. Was hoping people here might have some advice. I've written a summary of the situation and then some of my questions at the end.
Basically I'm concerned I might have undiagnosed ADD. When I was a kid, many teachers and (I think but not sure) my GP suggested I might have ADD based on my behavior, but my parents were pretty ideologically opposed to ADD diagnosis of children in general and preferred the explanation that I was a gifted child who was bored.
My whole life I've had a pretty typical list of symptoms - I struggle to concentrate on tasks and I always feel scattered and lost among different threads of thought and attention. If I do concentrate I hyperfocus and I have to whip myself up into a highly stressed state to be able to do it, I always do all my work at the last minute in a state of panic, I struggle with simple life admin tasks. I'm very restless and I'm always jiggling and moving or I catch myself jumping up and moving around for no reason. As a kid I had a lot of social difficulties but over time I've learnt to mask these pretty well. I acted up terribly in school when I was younger but calmed down as a teenager. I watch other people sit down and do a few hours of gently focused productive work without distraction and I can't imagine ever being able to do that. I've also suffered from (and been diagnosed with and treated for) depression during several periods since I left school, but I was also pretty depressed as a teenager.
I'm worried that a psychiatrist will dismiss this because I did very well at school and at university. I suspect that having very high intelligence has concealed the issues with my attention, and I think I'm actually massively underachieving. Consequently I've worked in jobs that haven't been very challenging or interesting, and I've become bored and left after a couple of years in each one. The last job, though, required a lot of 'self starting' planning and focused work on tasks that weren't just given to me on a proverbial conveyor belt, and I struggled hard and often spent hours just staring at my screen panicking about what to do next, waiting for an email to react to.
Now I'm self-employed and making a modest living, largely enabled by a partner who earns much more than I do. I'm not doing anywhere near the requisite amount of focused work to make my business grow and thrive. If the next decade passes like the one we just had, I'm going to be extremely unhappy, and either very poor and single or largely dependent on my partner. My depression will come back. I DO have the skills, knowledge and ability to do really well at what I do, what I need is just to be able to sit down and do good, focused work for hours a day, every day, without having to engineer emergencies that ramp up my anxiety to unsustainable levels. I drink ridiculous amounts of coffee and find I get a good productive couple of hours after that (like right now.)
So questions:
1. I really want to try medication for this - probably Ritalin. Many years ago when I was at university I tried dexamphetamine (in irresponsibly large recreational doses) on about five occasions and found it really enjoyable for meditating and reading. I stopped experimenting with it when I noticed I felt a compulsion to take more, and never took it again since. I remember the feeling of being on dex as having my attention turn into a spotlight I could direct at will wherever I wanted and hold it there easily, and it was very relaxing. I think to the psychiatrist I will seem to know a suspicious amount about these medications and I'm worried he might take me for a drug seeker - although this is a pretty perverse bind to be in, because I am indeed seeking the drug that is proven to be effective against the condition I think I have. How do I talk about these medications and the research I've done on them without sounding like I'm just trying to get a script to get high? Should I not mention previous recreational use from ten years ago?
2. How do I persuade the psychiatrist that I'm actually underachieving, and that I'm not simply regularly achieving and beating myself up unnecessarily? I think this will be easier than before, since I'm much more precariously employed so my life looks worse on paper than it did when I had a steady job. How do I explain how serious my situation is?
3. How do I explain that I've already tried so many different methods like meditating, productivity software, GTD, scheduling my day in 15 minute chunks the night before, and I just can't seem to finish any complex, multi-stage projects? What do I do if the psych says 'thanks for waiting three months and giving me all that money, now go away and try writing down your goals for the day the night before'?
In my personal experience, it will have more to do with the psychiatrist than it does with you. Some are skeptical, others hand out Adderral like candy.
Poor focus, low motivation, depression and other cognitive effects are also symptoms of low testosterone. How likely this is relative to ADD depends on other lifestyle factors, like your body composition, diet, whether you lift weights, etc. Don't be too convinced by a single diagnosis right off the bat, and don't think cognitive symptoms necessarily means it's purely a cognitive problem. Hormone tests are easy, so why not cross out that possibility right out of the gate.
That's interesting. Is low testosterone something that can be corrected, either through treatment or some other fix? I have seen many "hormone balancing" advertisements, mostly it seems for weight loss, so I am pretty skeptical about how well those things work. Never heard of the hormone thing changing other mental aspects.
It's ultimately about having the right balance of testosterone, SHBG, estrogen, etc. Natural ways to improve hormone levels, in rough order of significance: get sufficient quality sleep, lose body fat, add muscle mass, dietary changes (less sugars, more fiber, more protein, more cruciferous vegetables).
Testosterone levels also decrease 1-5% per year after 30 as well, so aging is also a factor that will exacerbate any pre-existing imbalance. Maybe crotchety old men mostly just have low T.
There are of course more direct interventions like TRT, which come as skin patches, gells and injections, but obviously healthy lifestyle changes are preferable because they have all sorts of synergistic effects.
I could have written this myself! So many things in common. Deadline anxiety as the only reliable motivator. High intelligence allowing for close-to-zero effort graduation. Occasional inhumane bouts of very deep concentration (as a kid, I have sometimes gone so deep to not respond to people talking to me, earning the remark "he is in trance again"). Except for the economic outcome: I have somehow became a well paid executive, earned money, invested well and gained total independence. Now, though, the ADD problem is even bigger: it is very difficult to motivate myself to do anything productive when I really don't have to. And there is only so much satisfaction you can get from sport and hobbies. Overall life feels rather good though.
I have been exactly where you are. I thought at first me wife was posting using a version of my story to get advice for me…
Chin up, in my experience getting psychs to not prescribe you aderol is more difficult than getting it. There is also apparently a type of ADD that presents as “brain does fine on tough stuff, but can’t be bothered to even call it noon easy stuff.” That’s what I was diagnosed with, so don’t sweat over achieving; it’s apparently not uncommon.
That said, I am not sure how much Aderol actually helps. Seems to a bit, but much like coffee for me it doesn’t do much if I take it regularly. I have yet to take enough to be able to work like a normal person, much less my over achiever wife, largely because I am scared of how much that might be and whether or not I would be able to stop.
Best of luck to you, and let us/me know how it goes.
I agree with Friv Jones that you are overrating how suspicious the psychiatrist is likely to be of your intentions. Many psychiatrists mostly prescribe meds and believe in them as useful change agents. Getting meds out of them isn't like getting admitted to Harvard Law School! If you encounter any problem in getting a script for ritalin or similar, I think it would likely take the form of the psychiatrist being in favor of your trying a different class of drugs, such as an antidepressant, before you try some form of speed. (And that might indeed be worth trying, especially if you have not tried an antidepressant before.)
The likelihood is that if you go in asking for an ADD med you will get one. Here are 2 things to consider before you go try to do that:
-There are a lot of things other than simple brain-not-working-right ADD that can cause somebody to have trouble with attentiveness, follow-through on plans and executive function in general: Preoccupation with matters other than task at hand; fear of failure at task; lack of appetite for task; dislike of task-setter; chronic pain; misery; anxiety. Productivity software and other surface fixes are not likely to be helpful if something of this nature is what's wrong.
-Taking adderall or similar is not going to tell you whether you truly have brain-not-working-right ADD, because you are pretty much guaranteed to feel better after you take the tablet: more optimistic, more energetic, more able to focus. So if a prescribed upper makes you feel that way, all you're going to know is that you're typical. Over the next year you'll probably be able to tell whether the drug is fixing whatever's wrong (or at least whether taking the drug is worth it): If on the drug your life changes in a big way -- if big trends change, if you're able to meet some long term goals -- then yeah, drug's probably making a difference. If it's more that the drug reliably makes you perkier, but you're not really advancing and feeling a lot more satisfied with how you're using your time -- then it's just basically a coffee habit, only worse for you.
I have sought and obtained scripts for ADD drugs at several disparate times in my life, -- meaning I have had a lot of first meetings with new people to whom I must explain my situation.
I think you are overrating how suspicious your psychiatrist will be of your intentions. Your ambition to get a prescription to that specific drug is maybe a little narrow -- but hopefully your psychiatrist will expose you to other (similarly effective) options as a part of the discussion. I also think you are overrating the likelihood that your psychiatrist will attempt to evaluate how much you achieve. They will not ask to see your resume or your grades, for example. I don't expect that they would try to run you through, like, coaching techniques like journaling.
I don’t have any advice. I’m just chiming in to say I feel like I could have written many passages in here. I’ll have spurts where I can focus, but then it feels like weeks go by where I’m lucky if I’m productive for one hour per day. I know I’m worse when I’ve been drinking in the previous couple days or my sleep is bad. I’ve reals all the books on focus, concentration and productivity too. Often a good one will inspire me for a week or too, but sooner or later I end up back in the same rut.
I'm very interested in answers to this. I'm about one step before this person in the process (ie haven't yet sought a referral to a psychiatrist) but the backstory and concerns are otherwise basically identical.
I'd be open to a meetup in Taipei, but the situation here is not really like other places. Low vaccine access, but also close to zero cases thanks to entry quarantines and testing. I'd feel very safe, but not sure that's universal.
Any interest (or opposition to this happening at all) here?
Would you feel comfortable emailing it to me, either the link or just as a video file? <my username> + <the letter n> at gmail dot com. I don't work at substack but I maintain an extension that makes the experience more bearable.
I've been trying to find a plot of the frequency of extreme weather events over time to see if they're becoming more common with climate change. However, it's frustratingly hard to find. The best I could do was this graph from the Met Office: https://www.metoffice.gov.uk/weather/climate/climate-and-extreme-weather
...but it doesn't link to any paper, nor does it explain its methodology. Does anyone know of a highly regarded paper, preferably a review paper or meta-analysis, that shows whether or not extreme weather events have become more frequent with time?
Hi. Based on your replies here, I note that you don't want commentary / projections, just studies of historical data. I try to accommodate that below: I've limited it to a maximum of three references per extreme weather event type, as you indicate below that you don't want too many papers. Let me know if this hits the mark for what you are requesting, and please clarify if it doesn't.
Temperature extremes:
Dunn, R. J. H., Alexander, L. V., Donat, M. G., Zhang, X., Bador, M., Herold, N., et al. (2020). Development of an Updated Global Land In Situ-Based Data Set of Temperature and Precipitation Extremes: HadEX3. J. Geophys. Res. Atmos. 125. doi:10.1029/2019JD032263
Alexander, L. V. (2016). Global observed long-term changes in temperature and precipitation extremes: A review of progress and limitations in IPCC assessments and beyond. Weather Clim. Extrem. 11, 4–16. doi:10.1016/J.WACE.2015.10.007
Zhang, P., Ren, G., Xu, Y., Wang, X. L., Qin, Y., Sun, X., et al. (2019c). Observed Changes in Extreme Temperature over the Global Land Based on a Newly Developed Station Daily Dataset. J. Clim. 32, 8489–8509. doi:10.1175/JCLI-D-18-0733.1.
Heavy Rains:
Zhang, W., and Zhou, T. (2019). Significant Increases in Extreme Precipitation and the Associations with Global Warming over the Global Land Monsoon Regions. J. Clim. 32, 8465–8488. doi:10.1175/JCLI-D-18-0662.1.
Sun, Q., Zhang, X., Zwiers, F., Westra, S., and Alexander, L. V (2020). A global, continental and regional analysis of changes in extreme precipitation. J. Clim., 1–52. doi:10.1175/JCLI-D-19-0892.1.
and the Dunn et al study above.
Floods:
Do, H. X., Westra, S., and Leonard, M. (2017). A global-scale investigation of trends in annual maximum streamflow. J. Hydrol. 552, 28–43. doi:10.1016/j.jhydrol.2017.06.015
Gudmundsson, L., Boulange, J., Do, H. X., Gosling, S. N., Grillakis, M. G., Koutroulis, A. G., et al. (2021). Globally observed trends in mean and extreme river flow attribution to climate change. Science (80-. ). doi:10.1126/science.aba3996.
Droughts:
I'm going to be honest, the literature here is too complex to grab a few papers, given that there are different types of droughts. I omit this due to not knowing what to present to you, rather than due to a lack of research (almost the opposite problem!)
Extreme Storms:
- Tropical Cyclones
Kossin, J. P., Knapp, K. R., Olander, T. L., and Velden, C. S. (2020). Global increase in major tropical cyclone exceedance probability over the past four decades. Proc. Natl. Acad. Sci. 117, 11975–11980. doi:10.1073/pnas.1920849117
^Note the published correction for this as well.
- Extra-tropical storms
Wang, X. L., Feng, Y., Chan, R., and Isaac, V. (2016). Inter-comparison of extra-tropical cyclone activity in nine reanalysis datasets. Atmos. Res. doi:10.1016/j.atmosres.2016.06.010.
- tornado (US)
Gensini, V. A., and Brooks, H. E. (2018). Spatial trends in United States tornado frequency. npj Clim. Atmos. Sci. 1, 38. doi:10.1038/s41612-018-0048-2.
Thank you! This is great. I've skimmed through most of the papers. The most striking findings include:
Extremes:
Increase in number of heavy precipitation days by 2 days (1900 to present)
Rx1day (max precipitation in 1 day) increased by several mm from 1900 to present
TX90p (% days when daily max temp > 90th percentile) increased by ~30% from 1900 to present
TXx (maximum Tmax) increasing by 0.13 C/decade from 1951 to 2015, with TNn (minimum Tmin) increasing by 0.4 C/decade.
Heavy Rains:
In monsoon regions, an increase in annual maximum precipitation by about 10% per K of warming (Zhang & Zhou 2019)
"The global median sensitivity, percentage change in extreme precipitation per 1 K increase in GMST is 6.6% (5.1% to 8.2%; 5%–95% confidence interval) for Rx1day and is slightly smaller at 5.7% (5.0% to 8.0%) for Rx5day." (Sun et al 2020)
Floods: Many sites with both statistically significant increasing trends and decreasing trends in magnitude of floods, from 1900 to 2014 (Do et al 2017). If the start point is 1955 or 1964, the dominance of increasing trends is very pronounced.
Tropical Cyclones: from 1979 to 2017, "the major TC exceedance probability increases by about 8% per decade, with a 95% CI of 2 to 15% per decade." (See also Scott Alexander's post on this, below)
Tornados: in US, flat national trendline from 1979 to 2016, but increasing or decreasing trends in different regions
Some of your dates are from 1900, while others have much later starting points. Was there a purpose or necessity of the later starting dates? For instance, I have heard that the 1940s were unusually cool, so the TXx going from 1951 to 2015 might be picked for a good reason, but might have been picked to show a higher effect. With floods that's more clearly so, as you note that by choosing a much later date than 1900, the effect is more pronounced.
Glad to help, and thanks for reporting back what you learned.
One of the key things to keep in mind is that prevalence / intensity can be important to understand, but ecosystem (and human) responsiveness and adaptation is non-linear. The plants and animals (and people) of monsoon regions might be able to adapt to a 20% increase in annual precipitation, but a 40% increase? ditto with other extremes.
It's not clear what the claim "extreme weather events are becoming more (or less) frequent means." Hot summers are becoming more frequent, cold winters less frequent. How do you decide how not or cold something must be to qualify, in order to have a count of each to add together? Similarly, at least one hurricane expert (Chris Landsea) suggested that hurricanes would be getting a little less frequent but a little more powerful as a result of climate change. Does that count as extreme events getting more common or less?
We don't have to get into those semantic debates. If you know of a good paper that separately studies the frequencies of hot and cold winters, or of the frequency and power of hurricanes, I'd love to read it.
Choice quote: "Tropical cyclone intensities globally are projected to increase (medium to high confidence) on average (by 1 to 10% according to model projections for a 2 degree Celsius global warming). ... most modeling studies project a decrease (or little change) in the global frequency of all tropical cyclones combined."
I don't have a source for frequency of hot or cold winters, and it isn't obvious how hot or cold should count as "extreme."
My point was that the statement "extreme events are becoming more frequent due to climate change" was probably meaningless, which suggested that people who say it are either not thinking clearly or being deliberately misleading.
If you search for words like "hurricane" or "drought" or "extreme" together with "frequency" and choose a site of interest such as "Judithcurry.com" or "wattsupwiththat.com" or "rogerpielkejr.com" or "realclimate.org" you will find much more information on the subject, often including references to data sources and methodology.
And Dr Ryan Maui has been collecting global cyclone data back the 80s which he's turned into some excellent graphs. Because his data hasn't reinforced the predictions of (some) of the climate modelers that extreme weather should be getting worse, he's been accused of being in the global warming denialist camp. But he's always declared himself to be firmly in the AGW camp. He's said that the data should speak for itself...
I haven't looked at the data from the latest years, but the 1970s were the peak decade for tornados. And the 00s were very quiet. This lull continued into the first few years of the 10s, but I don't know if there was an upward trend the last half of the 10s nor what's happening now that we're in the 20s.
I downloaded the CSV and counted the number of tornadoes per year, assuming that each row represents one tornado. There seems to be a strong upwards trend since 1950: https://imgur.com/a/oLWM5n8
Of course, some or all of this could be due to improvements in tornado monitoring. I'd feel much more comfortable with a reputable scientific paper than with plotting datasets on the web.
Yeah, raw data is often misleading on climate change topics, due to confounders such as changes to technologies, methods and comprehensiveness of coverage over time.
That's the primary data that researchers use — NOAA satellite data, temperature gauge data, and sea-level gauge data. You're just looking at it unmediated. As a government agency they're obligated to put it all on line. The trouble is trying to find it because I wouldn't say its very well organized. Interesting, though, because this isn't what I saw when I graphed the data a few years back. Not sure if I sent you to a different dataset than I was looking at. Sorry if I steered you in the wrong direction, but AGW is not a subject that has interested me lately.
Has technology for detecting tornadoes changed in ways that would create a positive trend? If in 1970 detection was mostly by a spotter seeing a funnel cloud and in 2000 it was mostly by Doppler radar, you might get an increase without any increase in tornadoes.
Matthew Barnett collected some interesting figures here, one of them shows fewer hurricanes making landfall, another shows less droughts/ devastation by wildfires (human countermeasures would be a confound ) . Worth checking out, and references are provided https://m.facebook.com/permalink.php?story_fbid=1197111504049534&id=100012520874094
Interesting, thank you! Titanium Dragon's quote from NOAA is particularly interesting, as it basically says hurricane data was too noisy and too unreliable in the first half of the 20th century to say much:
"Existing records of past Atlantic tropical storm or hurricane numbers (1878 to present) in fact do show a pronounced upward trend, which is also correlated with rising SSTs (e.g., see blue curve in Fig. 4 or Vecchi and Knutson 2008). However, the density of reporting ship traffic over the Atlantic was relatively sparse during the early decades of this record, such that if storms from the modern era (post 1965) had hypothetically occurred during those earlier decades, a substantial number of storms would likely not have been directly observed by the ship-based “observing network of opportunity.” We find that, after adjusting for such an estimated number of missing storms, there remains just a small nominally positive upward trend in tropical storm occurrence from 1878-2006. Statistical tests indicate that this trend is not significantly distinguishable from zero (Figure 2). In addition, Landsea et al. (2010) note that the rising trend in Atlantic tropical storm counts is almost entirely due to increases in short-duration (<2 day) storms alone. Such short-lived storms were particularly likely to have been overlooked in the earlier parts of the record, as they would have had less opportunity for chance encounters with ship traffic."
Titanium Dragon also says the following, which is fascinating, but unsourced:
"We basically only have 50ish years of hurricane data, and for reasons we still don't understand, the 1960s-1980s were a particularly quiet era for hurricanes. Start drawing your line from that era, you'll see an upward trend - but if you go back to the late 19th and early 20th century you see a number of extremely active hurricane seasons. Indeed, 2005 isn't even the all-time leader for ACE (Accumulated Cyclone Energy) in a season - the winner is 1933, and that's probably an underestimate as it was from the pre-satellite era. 1893 and 1926 are #3 and #4, respectively, and again, are probably underestimates (especially 1893)."
Another thing to note about hurricanes is that *damage* from hurricanes is going to be much, much more noisy than the hurricane data (which is itself noisy already). It can be predicted about as reliably as predicting the take from an average hour at a casino. Because while we might have some idea how many hurricanes there will be, the number of hurricanes that make landfall is pretty random, and the extent to which they hit major cities (where the big damage happens) is more so.
Why do you think that is relevant? I think that looking for further scientific papers on the unfolding environmental disaster is a sign of analysis paralysis. Extreme weather events are but a minor part of the overall picture, which extends further than the climate.
I'm not asking for further scientific papers; I'm looking for one. I think it's relevant because I want to put some numbers to statements like "droughts are getting worse" or "hurricanes are more frequent". Are they 1% worse, or 10x? The answer is important to informing my understanding of the world and the policies I'd support.
Are you looking for one paper, or the holy grail called Truth? Opinions differ; "figures don't lie, but liars figure." All "numbers" relating to historical climate data are subject to interpretation. You can find papers saying extreme events are getting more frequent, and you can find papers saying the opposite. Finding "one paper" ought to be easy.
I'm after the holy grail called Truth, but I have to start somewhere. If it's easy to find "one paper", could you link to the paper you have the highest opinion of? This could mean the one you think is most robust, most reliable, most comprehensive, most even-handed, most interesting, or some combination of the above.
Finding "one paper" is easy because there are so many. Finding "the definitive paper," if there be one, is not. I've seen many reports on the issue and disputes thereon, but it's not my hobbyhorse so I've naturally not kept track of sources; but I'd rely more on Pielke Jr. than Holdren. Google for their joint names and follow the bread crumbs…
There is no point to getting a precise quantification of such a small feature of the overall issue, as it would be an exercise in missing the forest for the trees.
"The Holocene extinction includes the disappearance of large land animals known as megafauna, starting at the end of the last glacial period. " (from the piece you linked to).
Humans have certainly had a large effect on the world, but most of that has nothing to do with climate change. Falling ocean pH, on the other hand, is probably due to increasing atmospheric CO2, but it isn't clear what the effects will be. Calling it "acidification" is technically correct but misleading, since what is actually happening is the ocean becoming less basic, shifting in the direction of neutral pH. But that can still be a problem for organisms adapted to their current environment.
That was a very misleading quote you pulled, Friedman.
> most of that has nothing to do with climate change
Exactly my point, climate change is only a piece of the overall mess.
> Calling it "acidification" is technically correct but misleading, since what is actually happening is the ocean becoming less basic, shifting in the direction of neutral pH. But that can still be a problem for organisms adapted to their current environment.
Maybe if there are people who interpret it as "the ocean will burn our skin off!". And who knows maybe there are. But even then, this seems like a nitpick.
I think a lot of the rhetoric around climate change is deliberately misleading, and linked with a wildly exaggerated picture of the actual evidence. "Acidification" is one example. I might be mistaken, but my guess is that most people who hear the term do assume it means the oceans becoming acidic, and that that is part of the reason for using the term.
What is misleading about the quote? The way you originally put it, someone who didn't know what "Holocene" meant would assume it had something to do with AGW. That quote, from the piece you linked to, makes it clear that it is the whole time period during which humans were affecting things.
That's fine, feel free to start a new thread about the radical policies you'd like and I'd read it. I'm interested in "such a small feature of the overall issue", but if you're not, you don't have to discuss it.
I think it is relevant, as it's a big piece of the discussion around climate change.
Besides, I don't understand why you are talking about "analysis paralysis"- about what? Paralysis for what choice? It is an interesting question in and of itself, isn't it?
There is no point to further discussion on climate change and environmental degradation (it's not just the climate), other than to just say to skeptics and optimists "Comrade Dyatlov... I apologize, but what you're saying makes no sense" (https://youtu.be/rFYbe91tPJM?t=130)
It's crystal clear at this point a huge part of the problem is that Dyatlov is terminally normalcy biased, and so the only discussion to be had is by the lucid, and only about how to evict the Dyatlovs from the control room, because whatever the solution is, Dyatlov has shown he will not contribute to it.
> There is no point to further discussion on climate change and environmental degradation (it's not just the climate)
Sorry, but you are not sufficient authority to people in general obey what you decreed. High quality scientific research is far more convincing. At least to me.
And there is huge gulf between "highly annoying to humans, will devastate wildlife" to "broadly speaking, people are fucked" - and here, from what I know, situation is not entirely clear.
Don't listen to me, listen to the IPCC saying "code red for humanity". If that's not good enough for you, then it would appear that nothing could ever be.
Im not sure I follow exactly what you're talking about, but I think if you are just shutting down questions of the sort of "are there papers on X" where X is related to climate science, then you're doing a disservice.
What I am saying is that no further analysis is needed on the matter, the thing is bad. Though that's not a reason to pause research, further research about establishing "exactly how bad" is not needed. The real question is why we aren't fixing it, why we're so akratic.
> What I am saying is that no further analysis is needed on the matter, the thing is bad.
So mimi is correct, someone is asking for information and you're intentionally trying to shut down this inquiry because you think the inquiry itself is unnecessary or "wrong", as if you get to decide how people spend their time or how they research topics or what topics should be researched.
I'm honestly baffled by this mindset. I assume you think you're being helpful, but I can't see that working from the PoV of the person asking for information.
How much would you pay for a deliberately boring news feed? Not that it shies away covering important topics, but that its goal is to minimize your surprisal when you read someone else’s headline.
No, because you'd still be surprised at reading other people's headlines, and that was the criterion. Presumably to minimize surprise at reading other people's headlines, you'd have to report the truth before other people do, which in itself would be surprising and not boring.
Then let me ask a related question; would you pay for a news source that was the anti-clickbait? Say if they wrote all titles as clear declarations of fact instead of obfuscation or questions, they pared the articles until they were terse, and if it's appropriate explain why this seemingly weird outcome is just normal business wearing a funny hat?
Matt Levine does a good job of getting to the point and explaining finance and law so that future headlines give you that "yeah, duh" feeling that Josaphat was describing. But he's got a definite and fun style, and his column is relatively long, so that's not exactly what I'm trying to describe, but he's closer than most.
Good question. I'd certainly replace the newspaper I'm currently subscribed to, and whatever I'm paying them. How much higher I went would probably also depend on what they covered. (e.g. I put effort into getting news that American media seems to consider too foreign to bother covering.)
It would help a lot if their journalists appeared to be at least as numerate as a bright junior high school student ;-) (I hesitate to aim too high. After some of the local covid coverage, I suspect most journalists are nowhere near that numerate ;-() What I really want are journalists who understand _at least_ statistics-101-for-non-majors.
I don't know about DinoNerd, but I tend to read Le Monde or Le Figaro once a week - but that's as much to keep my French up to date as to actually read the news. They do have better coverage of (Francophone, West) Africa in particular, and also of internal EU politics, though that's about as important as the equivalent "who's up, who's down" in Washington.
If you do have a language to the level where you can read a serious (non-tabloid, non-dumbed-down) newspaper in that language, then I'd suggest doing so because it will definitely report things that don't appear in the mainstream English-speaking media.
I mean, it’s exactly what *I* want in a newspaper. Maybe other people do too.
For example - and I apologize if this veers too close to politics - I watched both the 538 and the Crooked Media coverage of the 2020 Dem running mate selection process and who they thought would be selected and why, and it was very clear which group of people had been in the room where it happens. Armed with that knowledge, the next few weeks of headlines were entirely predictable.
A puzzle for those who enjoy such things: for which values of n is it possible to paint the vertices of an n-dimensional hypercube in n colours, in such a way that no point is adjacent to two points of the same colour?
Wait, this is asking whether you can n-colour an n-demicube, because hypercubes have no odd cycles and so the distance-2 becomes distance-1 in the alternation.
My guess is therefore "yes" for n > 3 since it can't contain an n-simplex (though it will contain (n-1)-simplices). Bugger if I know how to prove it, though.
Consider that if we have a correct coloring for the n-hypercube (in integers mod n), we can define an injection f from the vertices into the vertices such that color(f(v)) = color(v)+1 (at each vertex, choose the unique neighbor that has the next color - if the function so induced failed to be injective, this would mean n=f(a)=f(b) for a!=b, from which a,b have the same color and share the neighbor n, so the coloring is incorrect). This induces a sequence of injections from each color-class into the next, so all color classes have to be of equal size.
This means that our number of vertices, 2^n, is divisible by the number of colors n, so n has to be a power of 2.
I'm guessing that there's a general construction for n a power of 2, but I haven't been able to find it yet.
That's correct, and it's a much more beautiful argument than the one I was going to use. The general construction is:
Gur pbbeqvangrf bs rnpu cbvag tvir n fgevat bs a mrebf naq barf.
Ahzore gur pbbeqvangrf naq gur pbybhef va ovanel; pbybhe rnpu cbvag nppbeqvat gb gur kbe bs gur cbfvgvbaf bs gur frg ovgf va gur pbbeqvangrf - sbe rknzcyr, va 4q gur cbvag 0101 unf 1f va cbfvgvbaf 1 naq 3, juvpu ner 01 naq 11 va ovanel, fb gur kbe vf 10 naq jr pbybhe vg va pbybhe 2.
Is this related to the coins on a chessboard puzzle? (isomorphic to: for which values of n is there a solution to the coins-on-a-chessboard-with-n-squares puzzle?)
Hanania says that people go to grad school rather than become welders because they value status and influence above money. I didn't go to grad school for two years and get a master's degree because I wanted status and influence. That's silly. I did it because I wanted a white-collar career for which a master's was the entry credential, a career that I'd enjoy more and be better at than I would be a welder.
I think you're just unable to see the truth of the matter because it's too baked into your worldview. It's a career that you'd enjoy more and be better at than a welder because it's higher status. You wanted a white collar career because it's higher status. You'd like being around people with degrees because they're higher status.
There's a general college earnings boost and there are certain gated high paying careers (doctor, lawyer, scientist). But if you're maximizing pure ROI then a coding bootcamps or elevator tech schools or whatever often beat median colleges in lifetime earnings, debt to income boost, etc.
Try asking yourself why you want a white collar over a blue collar career. It's probably a bunch of classist assumptions. Not that there's anything wrong with being white collar! But it's clearly the more socially privileged workforce regardless of raw income.
Nonsense. I wanted a career in software development because I absolutely adored computers since I was vey young, and was never interested in cars or ... what else do blue collar workers do? Anyway, higher status? ppsh. *farts loudly* My car is a rusty 1999 Ford Taurus that squeaks really loud.
I mean, you are right in some sense, but.... I am socially liberal and another important parts of my worldview are that 1) I do not want to do hard physical labor and 2) I want to do intellectually complex things instead of dull repetitive tasks.
It is also of course true that ceteris paribus 3) I want to do higher status jobs rather lower status jobs, but 1) and 2) are imho sufficient explanations why I would require hefty wage increase if someone would want to poach me to a welding job.
Statements like these reek of people who don't actually come from a blue collar background. My parents pushed me into college education and white collar work because my dad had seen his own body, his father's, his grandfather's, and his great-grandfather's bodies and energy destroyed by a lifetime of manual labor. Even skilled manual labor can destroy you.
And I'm damn glad they did push me. As it turned out, I ended up having quite nasty spine problems in my 30s. There was a period of 3 years where I could scarcely get out of bed some days and couldn't put my own shoes on. But I was still able to work through a lot of that period, which I absolutely would not have been able to do if I'd become a welder.
Only if they're more productive in white collar professions. See how classists assumptions are so baked in you think they should go unstated? You're presuming a smart person is more productive in an office than working with their hands. You're also conflating IQ and education.
Same. I'm getting through my STEM degree 'cause I want to get payed more for what I can already do, and because people in trades often retire totally physically broken.
Same here: I did a two-year masters solely because I wanted a much more interesting job and a little bit more money (it worked, hooray) and I don't think a huge deal of status came with it either. A more fun answer to the "so what do you do" question at barbecues, perhaps. This was, admittedly, STEM.
Another reason, apart from status, power, money, interestingness of the work and not wanting to breathe in zinc vapour all day: I'd rather work with folks with higher level degrees than with welders. (no offense to welders)
Anecdotally, that seems true of my own social circle as well. The stereotype about people who flip back and forth between grad school and being cashiers or baristas is accurate to my experience.
I'm not exactly sure what being a welder entails, or how one gets into it, but my vague impression includes standing in a garage a lot wearing protective equipment, and possibly moving heavy objects around.
I do think there's something to the idea of conservatives wanting to be able to make money sooner, so that they can have more kids, possibly at the expense of quality of life later on. So you have men who can apprentice early in construction sort of trades, stay at home moms who need a husband with steady income, and (later in life) those on disability from working in physically stressful jobs -- all very conservative categories. Which is a life path focused on getting started as an adult with a family much earlier in life, vs getting a grad degree and marrying at 30.
Having taken welding classes and learned to code, just as a way to spend your time, if you must work, coding all the way. Welding is also hazardous to your health.
Yeah, I feel like the undercurrent within my childhood circle of blue-collar overachievers was something like “We must excel at all things, else we be menials!” My dad's aspirations for me were to get a job where I wouldn’t have to work out in the rain. It really could have gone either way; if I hadn’t been “a student” as my family called it, I’d be doing a blue-collar job, too. It was largely luck that made me a nerd, and going to grad school and into a white collar job was more or less deciding to stay on Standard Nerd Track.
mailman; best blue collar job in America. used to be better but rationalization has turned the letter carrier into more of a machine. fresh air, sunshine, a walk in the park, depending on neighborhood. "Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds." the snow, the rain, nice hot days, they were all great. opposite to welding or sitting all day coding as far as health outcomes. In my 100,000 rustbelt blue collar town it had status because it was relatively high paying and secure, and the moderate physicality signaled health. also competitive to get in. mostly memory and attention to detail testing. you had to score pretty high to have a chance. In Cleveland one year 10,000 people showed up to take the test.
Interested in hearing peoples first hand experience with brand Adderall vs generic. I had been under the assumption that generic drugs are always identical to name brand. I started on name brand Adderall a couple months ago and then recently switched to a generic version. I felt like the name brand was less effective, so I did some googling and there are other people who feel the same way, citing things like different filler ingredients:
I've gone back to name brand and the effectiveness has returned. There should be no difference in the active ingredient so I am a little confused. I am open to it being a placebo effect, but I noticed the decrease in effectiveness while I still believed them to be equivalent, so that makes me a little skeptical.
Scott had a long article about adderall on his practice’s site - it notes different experiences with amphetamine brands are often results of the difference in absorption rate into the blood form the gut, which is mediated by the packaging and salts and a number of other factors. That could’ve contributing. Note the variety of differences in the section about different brands and formulations. There’s also isomer differences which might be relevant but idk. I’d personally recommend against stimulants in any cases, but there’s some info.
Have you ever tried a double-blind test? Like, Buy some adderall and some generic; have somebody randomly assign them to days; take them without looking so you don't know which you have taken; try to tell afterwards which one you took today?
Can't speak to adderall, but I remember at one point definitely getting the impression that one type of Concerta was significantly less effective than another. The pills looked different, and I forget if I just had to live with it or switched to another pharmacy that could get the one that worked. Don't remember more details because this was quite a few years ago.
My wife and present house guest both have ADHD, and from what they've told me, it's not some general difference between name brand and generic drugs. It's specifically that Adderall made by Aurobindo is trash, for some reason or other having to do with the fillers.
Very much this. Aurobindo has been cited multiple times by the FDA for failing to meet manufacturing standards. Also, generics do not have to be "exactly identical" to name brands, they just have to have ±20% of the active ingredient in the blood stream, with a 90% confidence interval:
"To demonstrate BE [bioequivalence], the statistical analysis must show that the ratios (generic to RLD) of these parameters remain strictly within a 90% confidence interval of 0.80 to 1.25"
I have heard that also. Somebody I know who had initially taken Adderall, & then found the generic less effective, spoke with a pharmacist who told him that the Teva generic is made by the same company that makes Adderall and is essentially the same stuff. Person tried Teva generic and believed that it did indeed feel just like Adderall. He also commented that it tasted just like Adderall, whereas the other generics did not (he breaks his tablets in half, and sometimes just bites them in half, so is familiar with the taste).
As a soon-to-be parent, I’d like to ask for recommendations for parenting books. However, I’m looking for a specific sort of parenting books - which I call Outlier Parenting or Extreme Parenting.
Let me try to map out what I mean by Outlier Parenting.
I would define it as: Parenting that seeks right tail results via right tail methodology.
Outlier Parenting is not data-driven. This is because outlier parenting is rare enough not to have enough data points for analysis. Whereas many parenting books speak to the mainstream parent and focus on the normal distribution of child outcomes, outlier parents - both in their behavior and in their goals - are seeking to be on the far end of the right tail of the distribution.
As such, I would expect that each set of Outlier parents have their own specific ingredients and methods for parenting.
Example: Scott Alexander had written about and reviewed Polgar’s “Raise a Genius!” I found a lot of value in the book. But I want more… much more.
Another example: First 30 minutes of Captain Fantastic. Ok, fine, this isn’t a book, it’s a movie. And it’s not even a documentary, it’s fiction. But that’s an example of the type of Outlier Parenting that I would love to read about.
I’m not looking for books that tell us how mainstream education is bad. At this point, the existing education system has become a straw man for people like me.
I’m also not looking for books on homeschooling. Whereas “traditional schooling” has become so rigid that we know exactly what to expect, “homeschooling” as a term is so undefined that it can mean 1 million different things in 1 million different homes.
I’m also not looking for books on unschooling, unless unschooling means lots of goals, tons of work by both parents and children, and at least some structure.
I’m also not looking to engage in the nature versus nurture debate. Clearly (I think), my target for parenting books will focus more on the nurture side.
I’d like to read books by people who set out to raise superhumans. I’d like to know the details of their methodology, experience, and the results and lessons.
If such books don’t exist, I would appreciate any other leads (blogs, diaries…) Thanks!
I would read Alison Gopnik. But she is going to tell you, (as is just about everyone else who has had success working with kids), that trying to build your kid into something specific is a heart-wrenching journey.
It depends on what you mean by "right tail results." To me "right tail" denotes something very improbable. Is this raising kids who will thrive in an improbable future? Then they should be adaptable, have lots of grit, take initiative etc. Or do you have a specific right tail risk in mind? In either case, I think child psychology books would be a good basis because you will want to consider things like the effect of peer group vs family group, or what periods in a child's life are most sensitive for the development of language etc. I am curious what you would consider extreme parenting. My kids have spent their entire life so far in African countries, and I was off-grid, 45 minutes from the nearest phone when I was a kid, and I would not consider either of these situations as extreme, because many people grow up in this way. The book "Child of the Jungle" is about one form of extreme parenting and the difficulties the author faced when reintegrating into modern society.
Thank you for the book recommendation. This does indeed seem like an example of extreme parenting.
When I refer to "right tail", I'm not speaking in terms of risk, but in terms of a general normal distribution, i.e. the right tail of aptitudes, outcomes, time investment, happiness, productivity, etc. For example, let's say a set of parents conclude that the +- 2sd range for daily "high-engagement learning" is .5-4 hours, and decide to invest x time and y resources to provide their child with 10 hours per day of "high-engagement learning". You get more of these stories about golf and tennis, which are interesting... but I'd like to find out what else has been tried, how it was tried, and what were the outcomes.
Your comment that many children are raised off-grid is accurate, in that there are 10s of millions of children that may fall in that category. The difference with a "Western" parent choosing this path for their child usually will require quite a bit of intentionality. This intentionality may be selfish or out of necessity (less interesting for my purposes), or with specific goals for the child in mind. I'm seeking out stories (methods, outcomes, successes, failures) of the latter.
Parents have a natural drive to invest in their offspring and it seems that any parenting trick that leads to greater outcomes for the children will be copied by many.
Not sure how far out on the right-tail my kid will ultimately get, but he’s doing quite well so far. We haven’t been obsessive about it, but we’ve been decisively in favor of him developing any latent superpowers he might possess. You’ll find that a lot of parenting is improvising. There really aren’t any grownups. Everyone’s kinda making it up as they go along.
For what it’s worth, my experience consists of raising one boy who’s currently 16. His whole life, we’ve regularly gotten effusive compliments from other grownups (teachers, parents of his friends) on what a great kid he is. Most of that is just him being his excellent self, but here’s a few suggestions on things that seemed to work well from a parenting perspective.
We only had “the one big rule”: Don’t Get Hurt. Too many people have too many rules for their kids. (He’s always had empathy, so we never had to state the complementary rule “Don’t Hurt Other People”. Suggesting “that might hurt so-and-so’s feelings” was enough.)
Get in the habit of carrying a handkerchief: it’s super-useful when they’re really little.
Our kid talked early and often, but he could understand speech and use a few basic sign-language moves for a couple months before he started speaking; the signs for “more” and “all done” are especially useful. The sooner they can consciously communicate, the better.
Good manners will take you a long way, and they don’t cost much; bad manners can get real expensive real quick. If you practice good manners at home (just simple old-fashioned stuff like saying please and thank you to your significant other for every little thing, for example) your kid will soak that up, imitate it without even thinking about it, and get more cooperation and extra respect from most other people with very little conscious effort for the rest of his or her life.
Only say no when you really mean it, and always explain why. None of this “because I said so” bullshit. If you don’t mean “absolutely not, that’s a flipping horrible idea and here’s why” then don’t say no. Say “not today” or “maybe, if we have time” or “I’d rather you didn’t, because,” etc.
Don’t bullshit your kids - I mean, believing in Santa is kind of a fun game (and the eventual disillusionment gets them used to the idea that mythological-sounding stories probably aren’t really true) but, in general, give them as much of an honest answer as you think they can handle, for any question they ask. If it’s something you don’t want to explain, you can explain that (i.e. “oh, that’s a gross joke about sex - I’d rather not go into the details, ok?”)
Praise and thanks are best when immediate and specific (“thanks for helping clean up for the party - yeah, stuffing most of your toys into the closet totally works. That was a good call.”)
Correction should be mild and certain, and involve a dialogue, not a lecture: “we left the party and you’re going home to have a time-out because you bit that kid. Oh, you bit him because he was holding you down while that other kid punched you? Ok, that’s a pretty good self-defense move; I can see why you did that. No, we’re not going back to the party now. I mean, a party where you get into a situation like that, that’s not a good party. Well, I’m sorry you didn’t get to have cake, but there will be other birthday parties.” Time-outs were 1 minute for every year of age: hardly ever had to use them, never after he turned 6.
When possible, let them have a turn calling the shots. “Do you want this for lunch, or that?” “What do you think we should draw?” Kids have so little control over their own lives, and they need all the practice they can get making decisions. The sooner and more often you can allow them to exercise some control, the better. “Do you want to go on this ride, or that one? Or maybe that other one first?” (Pro tip: it’s also a great sneaky way to steer them away from stuff, by not listing options you don’t want them to choose while giving them something else to think about and a gratifying feeling of agency.)
Prioritize giving them your attention and being patient. They will want to tell you about all sorts of things you may have little interest in, and it is a pain in the neck when you have to get up in the middle of the night and change the sheets because they wet the bed. Patience and empathy are essential virtues here.
You will have occasion to apologize to your kid: I recommend short, simple, direct, slightly on the formal side but sincere. “I’m sorry mommy and I were squabbling; I’m sure that was no fun for you. People just step on each other’s toes sometimes - I think we’re all settled down now. But I’m sorry you had to listen to us yelling.” (Still happily married, btw!)
Hope this helps - everyone’s got their own row to hoe, YMMV, etc.
"Bringing up Bebe" is my favorite parenting book, but I wouldn't recommend reading books that are directed at parents of kids much older than your own. They will give you a long list of things to worry about and dread, but no individual kids hit all the lowlights of parenting. Better to wait until you learn what particular problems and opportunities you will actually face. Remember, no plan survives contact with the enemy, and your parenting will likely be dictating in large part by your kid.
You're not "trying to get into the nature vs. nurture debate", right, but it's not a debate anymore (scientifically) and hasn't been for 20-30 years: Parenting doesn't have much influence on the basic characteristics of children (personality, intelligence, etc.). So if as a parent you want your child to excel in a specific area, you can "only" increase the amount of time he/she spends practicing that thing, which means doing less of other things, This also implies deciding for the child what area he/she is supposed to excel in. And since education doesn't change intrinsic abilities, if your child isn't quite good at the area you've selected for them, they won't in fact excel.
Anecdotally, I watched Captain Fantastic with my 16 and 12 year olds a few months ago and they thought the kids' education was horrible.
Thank you, Emma, for your recommendation of books on Outlier Parenting. In response to your comment, I can only surmise that your children were genetically predisposed to disliking Captain Fantastic's alternative education methods.
I'm sorry I wasn't helpful. I disagree with your goal, not because I disagree with your goal (every parent has different values about education, of course), but because I feel that your goal is based on a false premise, that it is possible for parents to make geniuses out of their children. I think that it is not possible, because a huge amount of studies say that parenting has little influence on the things that are, in my opinion, associated with geniuses.
If you really are interested in "lots of goals, tons of work by both parents and children, and at least some structure" you could try "Battle hymn of the Tiger mother" by A. Chua, who explained how she did just that with her daughters (and with, in my opinion, previsible results).
Emma, Thank you for your suggestion and sorry for the snide reply.
The only personal goal that I had stated in my original post was seeking a specific type of literature. Beyond that - my two examples were of a parent who focused on chess and a parent who focused on liberal arts and wilderness survival. How somebody might guess my specific conclusions and child rearing takeaways from those two works is beyond me.
I'd be lying if I said that I wasn't disappointed by the fact that most of the replies to my query were of the discouraging "your children will hate you" sort. I didn't expect such a closed and anti-curious response from this community.
I will also say that despite its title (Raise a Genius) and everything presumed about my goals, any parent who reads the Polgar book and finds nothing of value in it perhaps is not best qualified to comment on parenting threads. And that includes even the most ardent genetic determinist.
Not being interested in the nature vs nurture debate means that you might potentially waste a lot of effort trying to raise a genius child and create long-lasting resentment towards you in your kid.
This isn't really about outlier parenting in the sense that it's about trying to produce geniuses, but its advice seems to me really essential for anyone with that particular aim: Alfie Kohn's "Punished by Rewards," or perhaps some shorter works by him would work too. The argument is that rewarding or even just praising children for doing what you want often backfires. Not a parent, but I found it convincing.
Oh, that's, uh, not the recommendation. Punishment and blame are just as bad as reward and praise.
The stuff about rewards is mainly about things where you want the child to *want* the right things, or *like* the right things: so rewarding or praising children for studying or learning or being kind to others is a bad idea, because you need them to end up wanting to do those things for their own sake, so you don't want to make their motivation to do those things end to get tied to rewards or praise. He cites a bunch of research showing that both children and adults perceive activities as less intrinsically desirable after getting rewarded for doing them--it's like they reason that, if they have to reward me for doing this, it must be because it's not actually desirable to do.
I don't remember if he had detailed suggestions about what to do when the kid does bad things, but I'd guess his suggestions there would end up closer to the liberal mainstream (e.g., try talking to them rather than a punitive approach, whenever possible).
Question, how many soon to be parents are on this blog? Because I seem to see this or adjacent questions quite a bit here, so that makes me curious. (And pls keep the recommendations coming, although I'm personally some years away before it'll be relevant for me)
I’m on the other side with a 3 month old at this point. My life is definitely scrambled but there’s fun moments in between a major Increased level of effort in home life.
I don't know you and this may go without saying but I hope you've considered how to handle a child who ends up unable or unwilling to deliver the kind of outlier results you're hoping for. It kind of sounds like you're planning for your kid to be a genius before they're even born and that seems like it risks a lot of stress and conflict if things don't go as well as you're hoping.
Try Bruno Bettelheim's "The Good Enough Parent." It's not a how-to about parenting but more like a how-to raise a child into an excellent adult, in all the meanings of that word.
The child prodigies on YouTube are usually excelling in something their parents already do well/love/devote significant time to (the physical skill ones at least). The parental involvement, dedication and ability to impart technique elevates the situation above “talented kids taking lessons.” My impression is it takes more than one generation to pull that off. And you have to put the kid in it very early to make them feel like a fish in water in those skills. So whatever it is that you do, live and breathe, those are probably the candidate fields for superhuman status for your child. If I’m thinking of the right movie, the superhero grew up living it with every moment an opportunity for additional skill, so she became next-level. The child’s natural aptitudes can influence it too. But learning self-care outside the social norm is also key, because kids in the US are bombarded with messages about the importance of goofing off, the emotional necessity of wasting time. So to have them learn to identify and handle emotions, be rigorously and positively honest and release stress, will separate them from a super kid who will burn out at 19 from a super kid who can transition to adult success. That’s a hurdle many people can’t clear.
One of my kids is startlingly good at something which he recently connected with, the other is still looking, I am probably an example of burnout, so this is is from the perspective of “what not to do” as well as “what to do.” Workaholism not at all identical to what you are talking about.
Venus and Serena Williams come to mind, Tiger Woods to an extent. Their backstories include some of the factors you mention. Also bassist Victor Wooten. If I think of any others I’ll post.
"The child prodigies on YouTube are usually excelling in something their parents already do well/love/devote significant time to (the physical skill ones at least)."
I think that this is a very important point! I was impressed by the oppressively strict parenting style of Amy Chua, which she described in her bestseller of about ten years ago, The Battle Hymn of the Tiger Mother. To keep things short: A Chua, who is a law professor , forced her daughters to spend all their free time practicing music, so that they could excel. Right now, her two daughters work in law.
Something that is consistent with your examples is Polgar's clear focus on a single discipline in his approach. I would be interested if there are documented multi-disciplinary approaches that show promise.
Polgar's clear focus on a single disciplines still had a space for some extra things. His proposed ideal day: 4 hour chess (or whatever is your specialization), 1 hour languages, 1 hour computer science, 1 hour humanities, 1 hour physical education.
I imagine you could tweak it to include two or more specializations. The easiest way would be to switch the specialization every two or three months; rotate between two or three specializations.
Also there’s a type of radical acceptance which I think is different from the “I make my kid into a genius.” More the “down to the last detail, what environment can I create that will have as an end result the child having necessary skills to achieve?” Of course the kids crack up later if you live your dreams through them, if they become a robot they crack up when they meet adult society, if their actual skills and needs are not honored they are always at cross-currents with themselves in a detrimental way. If it’s power-over, then in order to become fully adult they have to reject everything you taught, which is not the goal. But that being said I think recognizing and leveraging those things is 100% possible. There are very highly competent people with a wide range of initial talent/interest, however that’s measured.
Question: What is the difference between life on Earth and life on a large, luxurious, self-sustaining spacecraft lost in space?
In the movie/poem Aniara, a large (4,750 meters long and 891 meters wide) and luxurious passenger spacecraft (named Aniara) leaving Earth for Mars is hit by some space debris. This results in the spacecraft losing control over its trajectory and they are now unable to return to mars nor earth. The spacecraft is self-sustainable, but after a few months the passengers are reduced to eating algae. The movie explores how meaningless life seems to be for everyone on-board.
I suspect that most people, like myself, believe that life on Earth is meaningful while life on Aniara is not. I find this to be strange since ultimately both Earth and Aniara is just a large thing floating in space, so somehow life should be equally (not) meaningful. So, what are the key differences between Aniara and Earth? Or in another word, what features should Aniara have to make life feel meaningful?
Another interesting question is how the psychological aspects of life on Aniara compares with early humans' life (under any reasonable interpretation of early humans). Is the psychological struggle of Aniara similar to the struggle of early humans? Perhaps this is why religion is so universal?
Many people derive value and meaning from doing useful work, where "useful" means making something better than it was before. Some people do this through their job, others just use the job to pay the bills while they raise children who they hope will exceed their parents. And some will substitute "avert catastrophe" for "make something better". But if you're on a spaceship which is A: near capacity and B: safe and secure and C: definitively not going anywhere, then you've got precious little avenue for any of those. Which is likely to lead to ennui, despair, and maybe nihilistic religions.
There are also people who are content to coast through a comfortable life, accomplishing little, but those are the people least likely to find themselves on a spaceship. A luxury liner, maybe, but even there you're probably selecting for people who want either the experience of seeing strange new worlds. or to come home with the status of having made the trip, and you're losing both of those as well.
Also, with apologies to Harry Martinson, "Aniara" is no longer a ship, it's a fleet - the last survivors of Sjandra Kei, pursuing the Blight to mutual annihilation by the Countermeasure and the Godshattered remains of Pham Nuwen. If your world is beyond hope and what remains of your life is in the depths of space, that's a much better path to follow.
Hmm, not really a recommendation. I don't much like it, but there is a genre of 'lost in space' stories. "Rendezvous with Rama" by A. Clark... there must be hundreds of others.
-Amount of mystery, sense of things to discover: Lots we don't know about Earth & its inhabitants and history.
-Otherness: Earth was not planned and built by us, LLSetc. was
-Temporal depth: Earth is vastly older
-Human temporal depth: Vast number of members of our species have lived on Earth before us. Knowledge of their past presence enriches our experience.
-Ties to our human past: Bones of our lost loved ones are on Earth, not on LLSetc.
-Number of possible present and future human ties: Way more possibilities on earth.
-Cultural depth: You can take books and music and art onto the LLSetc., but they are about life on Earth, not life aboard the LLSetc. LLSetc. has little cultural depth.
- Size and variety: What if Aniara is the size of Texas?
- Amount of mystery, sense of things to discover/otherness: I think you're onto something here but mystery is not quite the right word. For instance if Aniara grew every year in some random way (e.g. an AI generating a lot of new random rooms every year) then we will have plenty of mystery, but I think it's still meaningless.
- All points related to the past: Yes this is an important point, and is related to my question regarding early humans. The issue is that some of the earliest homo sapiens (is this even well defined?) cannot rely on this. Do they experience this meaninglessness as well? Or perhaps they're too occupied with the terror of death?
Now that I think about it is this more about sense of security? For instance if I was born in Aniara as the 100th generation then I'll probably be convinced that life is going to run for 100 more generations, and somehow this generates meaning. So history is only useful to convince ourselves that there's a future.
Also for what it's worth there's some amount of cultural depth (new arts, religions, etc) in Aniara but clearly not enough.
This point about the future generating meaning is an interesting one that was first clearly pointed out to me by an article by Samuel Scheffler in the New York Times, discussing the movie Children of Men. I think he explores this theme further in his book from around that time, but I haven't read it yet (and unfortunately, I never actually took the time to get to know him, even though he was a professor in my graduate program - I thought of him as working on very different things from me at that time).
Thanks for the book recommendation! It seems extremely relevant and I will definitely read this. (The book is Death and the Afterlife for those too lazy to find it)
Do you have anything specific in mind that will constitute as plenty of mystery? Maybe if it somehow generates forests/mountains/caves? I guess at that point that'll be so different from our life that it's hard to imagine how that'll feel.
Well -- you could have AI perfectly reproduce parts of Earth, landscape complete with fossils, vegetation, etc. But that idea's cheating I think.
You could have the AI carry out some processes that are approximations of some on Earth. For example, whatever it is that waves do that scupt natural caves, form the beach into ripples, etc. So then instead of rooms you'd have areas of the ship that were beach-like, rockface-like, etc. Areas would be irregular but not random, like so much of earth is -- mt ranges, waves, coastlines, tree branches. Spaces where you can't predict, just from the features of one area, what's happening at the adjacent spot -- and yet the human eye senses the order there, the deep nested regularities. You know, fractals etc.
Or you could have LOTS of mystery and richness if the AI set some organic process going. -- some living thing that reproduces and evolves and uses some of the ship for food and then creates new parts of the ship as it goes about building nests, discarding waste etc. Problem is, the organism is going to view the human occupants as just raw materials for its life. So now instead of Mall of America you've got life in the movie Alien, which sucks marginally more than M of A.
I haven't seen the movie, but from the summary on Wikipedia it seems like life on the Aniara is neither "luxurious" nor "self-sustaining." First the ship loses propulsion, then the VR system they use to make life on the ship tolerable breaks down, then they're reduced to living on algae, then the power goes out, and then everyone dies. Life seems meaningless because the ship is doomed and it's only a question of how quickly they'll die, and how much they can lie to themselves about it. That seems more salient than simply "being on a spaceship" - if the ship wasn't breaking down, it would just be one of those neat sci-fi stories about life on a generation ship.
Like, I don't think people would say the Quarians from Mass Effect live a life devoid of meaning, even though they not only live on spaceships but spend most of their life inside environment suits because of the danger of disease. Their society is stable, self-sustaining across many ships, they've developed a culture around their ships and suits, and they even have the hope (slim, until the player intervenes) that they might once again see their homeworld.
I would say the Aniara seems meaningless because it's not only doomed, but doomed on a human time scale rather than in the sense of "in a million years the Sun will engulf the Earth." And indeed, what Aniara reminds me of on Earth would be the extreme climate doomers - the people who say "I don't want to have children because they'll be born into a world that we've destroyed."
It's one thing to know that, in some distant, abstract sense, everything you do is impermanent. It's another thing to know that the inevitable doom could happen in your own lifetime.
Yes, I intentionally gave a more generous description of Aniara to make the question more compelling. In theory Aniara is really self-sustaining if it's (manually) maintained well enough, which didn't happen in the long run because, among other things, people start killing themselves a few years into the trip. So in the movie the societal collapse induced the infrastructure collapse, not the other way around.
I googled a bit and it seems like the number of humans necessary for a sustainable colony is surprisingly small (all of the numbers I've seen are comfortably below 1000), so in theory they're not doomed on a human time scale. But I agree that most of the passengers feel doomed in the human time scale and this is a huge part of the equation.
Your point regarding extreme climate doomer is very good. I suspect most people with extremely pessimistic view of their life or the world in general doesn't see meaningful difference between Earth and Aniara.
This story sounds overly pessimistic. In general, people facing hardship tend to be remarkably good at persevering and ingenious in coming up with reasons to do so. Countless examples demonstrate this.
I suspect the feeling of meaninglessness comes from having specific goals that you care a lot about but don't think you can ever achieve. From your description, I'd guess goals are probably
1) Make significant long-term improvements to your society (the Aniara)
My impression from the summary was that these are the main problems. (1) - life is now, for our people, about as good as it will ever be for our people, with no hope of improvement. (2) - most people are not in our group, and our group will never know what any of them are doing or thinking or caring about.
I see. I guess 1) is not possible because there are no resources to do, say, non-theoretical scientific research? I think for a very long time essentially all humans don't make significant long-term improvements to their society. It's still true now but it's especially true thousands of years ago, I think. Is the goal/hope of doing so sufficient to generate meaning? For what it's worth I also think that the typical human's goal is way more local and modest: Do good things for me and the people I like. Perhaps your point is not that I cannot achieve goal 1), but that no person can achieve goal 1) and hence society will be "bad" forever? (e.g. It's not that I can't make a new iPhone, but it's the fact that no one can make a new iPhone hence society is bad)
I agree with point 2) to an extent. Again I think most people are mostly interested with the more local things i.e. reuniting with family and friends.
Thanks for the answer! This has been bothering me for many weeks.
I think a lot of people hope that their children will have a better life than themselves. I guess that doesn't necessarily imply that society in general is improving, though it requires that there are opportunities for at least local improvement.
Of course, one also should not discount the extent to which the storyteller is framing the story to evoke a particular feeling that might or might not be how people would actually feel about it.
I don't think your question makes a lot of sense. Undistributed earnings can be distributed to shareholders or reinvested in the business. Sometimes it is invested unsuccessfully (Time Warner purchasing AOL) and sometimes it's successful (Amazon). When it's successful, it just turns into more undisturbed earnings.
I think the boring answer is that US equity investors have historically earned a premium of around ~6.5% over Treasuries. Looking at research on the equity risk premium might help answer your question.
Thinking about your question more, I think you're asking if an accounting principle (retained earnings) is correlated with future equity returns. There's a huge world of research there, starting with Fama and French's factor model
I’m mostly interested in the observed correlation strength between undistributed earnings and later dividends/capital gains for the recent decades. Is there a reference you can give me for the US companies? Thanks.
Anyone here have experience/knowledge of NMN? I'd in particular be interested in:
1) An ELI5 explanation of what it does and how that is supposed to slow down aging. I've tried to look around a bit, but a lot of what I find is way, way over my head.
2) Relative estimates for how likely it is to do anything helpful/harmful.
3) A good (preferably high status) place to point family members towards to convince them you're not just taking creepy drugs because the internet told you to.
ELI5 - we know that NAD+, a critically important substance for every living cell, declines with age. The idea is that by providing NMN, which is a precursor (raw material) to NAD+, you'll restore the NAD+ levels to those of a younger person. This is promising not just because a critically low level of NAD+ will kill you (the levels are not _that_ low), but because the level of NAD+ regulates the activity of many enzymes. Thus, providing a level closer to a young human may, in fact, make you function more like a young human.
The main failure mode here is, breaking the thermometer to solve the room getting hot. The entire field of aging research knows a lot of correlations, but not many causal chains. There are plausible mechanisms how it can mitigate the impact of some age-related diseases. It's also possible that the entire thing is a red herring and the low NAD+ levels are incidental.
It looks almost certainly safe, and either placebo or somewhat beneficial. I wouldn't spend my life savings on it, but would consider getting it at non-ripoff prices.
Unfortunately, all actual high status places I've found are PubMed, which your family members probably will find baffling. There was at least one clinical trial: https://clinicaltrials.gov/ct2/show/NCT02678611
I would very much like to request Scott - or someone else who feels they could tackle it - to do a post about the history of the teaching of science in schools, particularly with reference to the vexed question of Human Evolution.
I've had some small exchange of views on this in another comment thread, but I don't know enough. All I really know is (1) the Scopes Trial happened in 1925, and there was such a lack of people lining up to hire lawyers on the basis that "I was fired simply for teaching the science!" that they had to advertise for anyone wanting to take such a case (2) "Survivals and New Arrivals" by Hilaire Belloc in 1929 twitting the Protestants over such controversies (though, being Belloc, he backs Lamarck over Darwin) in criticism of Biblical Literalism:
"The Literalist believed that Jonah was swallowed by a right Greenland whale, and that our first parents lived a precisely calculable number of years ago, and in Mesopotamia. He believed that Noah collected in the ark all the very numerous divisions of the beetle tribe. He believed, because the Hebrew word JOM was printed in his Koran, "day," that therefore the phases of creation were exactly six in number and each of exactly twenty-four hours. He believed that man began as a bit of mud, handled, fashioned with fingers and then blown upon.
These beliefs were not adventitious to his religion, they were his religion; and when they became untenable (principally through the advance of geology) his religion disappeared.
It has receded with startling rapidity. Nations of the Catholic culture could never understand how such a religion came to be held. It was a bewilderment to them. When the immensely ancient doctrine of growth (or evolution) and the connection of living organisms with past forms was newly emphasized by Buffon and Lamarck, opinion in France was not disturbed; and it was hopelessly puzzling to men of Catholic tradition to find a Catholic priest's original discovery of man's antiquity (at Torquay, in the cave called "Kent's Hole") severely censured by the Protestant world. Still more were they puzzled by the fierce battle which raged against the further development of Buffon and Lamarck s main thesis under the hands of careful and patient observers such as Darwin and Wallace.
So violent was the quarrel that the main point was missed. Evolution in general—mere growth—became the Accursed Thing. The only essential point, its causes, the underlying truth of Lamarck's theory, and the falsity of Darwin's and Wallace's, were not considered. What had to be defended blindly was the bald truth of certain printed English sentences dating from 1610."
So what happened between 1860 and 1925? What was the state of acceptance of the Theory of Evolution and how was it adopted into school curricula? Did the Northern United States teach it where the Southern states did not, or was it just that there wasn't a big splashy trial in the North? I'm aware of how the science is new and exciting but it hasn't made it into the school textbooks yet, so I'd like someone smarter and better-informed to trace the path of development from "Darwin says I'm a monkey's uncle????" to "sure, of course all our school textbooks contain this!"
Did it get taught earlier in Europe? Was America an outlier? What was the state of play in 1925? Because I have no objection to being called a troglodyte who wants to drag everything back to the Middle Ages, but I'd like to see some *facts* on this rather than the pop culture version of "Ordinary high school teacher doing his job was dragged into court by the ignorant Bible-bashers".
(Nobody has called me a troglodyte, just to make this clear! The other party was exasperated but polite!)
This guy: https://drakelawreview.org/volume-49-no-1-2000/ attributes some of it to the Protestant reformers casting about for another victory after succeeding with Prohibition. As to why the Catholic tradition is less literal than the Protestant, I think Catholic intellectuals are just smarter than bog-standard Protestant preachers, and I speak as a Protestant. I mean, Augustine said "C'mon, people, you don't have to take Genesis *literally*. Sheesh." (That may be a paraphrase.) One commonly-held view about why the U.S. Supreme Court has so many Catholic justices is that they have the conservative views about abortion that the Evangelicals want, but they're well-educated and thoughtful, so safe-ish to have on the Court.
What I really want to dig into and get at is "what was the state of teaching biology, including evolution, in the 20s?"
Because what it seems like - and I could be very wrong, which is why I want someone more scholarly to dig into it - is that local politician gets up on his hind legs and has act passed, everyone says "sure, right, whatever" and proceeds to ignore it - and this was in Tennessee, where the famous trial took place, and which then gained the reputation of redneck ignorance and "Science versus Religion".
But *was* Evolution by Natural Selection 'settled science' in 1920s? What was being taught elsewhere? Were there similar acts in other states, we just never heard of the one in (pulling this out of the air) Vermont because nobody had a show trial there? Darwin's particular theory suffered an eclipse up until the 20s so that a state wasn't teaching *Darwinian* evolution does not mean it wasn't teaching evolution *at all*.
Basically I want to know what I'm fighting about when I'm fighting about the glib assertions that "the Republican Party has to appeal to the religious and the religious are all anti-science, that's why Republican politicians are anti-science".
The Catholic tradition is less literal than the Protestant because *the Catholics compiled the Bible*. In Catholic tradition it has always been a book about divinity, but written by fallible humans, because *they wrote it*, or at least chose what sources out of hundreds or thousands of possibilities to include. Because the people and councils and committees that did this did not claim divine inspiration, no religion that maintains an unbroken line of tradition from those people can claim they were divinely inspired.
It does help that noting that the Bible is just a book, about god but by humans and for humans, leaves power and doctrinal choices in the hands of the hierarchy of clergy, instead of surrendering it to every idiot who can read.
Ummm, no. The Catholics did not compile *The Bible*, the Catholics compiled their version of Bible. Eastern Rite churches compiled their own Bibles. Different traditions eschewed different books. And having split off from the Catholic church, Protestants eschewed some books that the Catholics weren't offended by. I think the Syriac Orthodox Church has the most books of any Christian tradition.
I think just about every Christian tradition eschews Enoch, Jubilees, The Prayer of Solomon, The Ascension of Isaiah (which may have been written after the Council of Carthage), and Baruch. There are a bunch of other books I'm forgetting.
The Ascension of Isaiah and Enoch are fun reads, though! Don't avoid them just because you're Catholic or Protestant.
That would be great, thanks! I'd like to get some kind of overview of "okay, so between 1870 and 1920, when did Evolution The Theory start getting taught in schools and when did it move from 'here's an interesting notion" to "this is the settled science"? Particularly when did schools start using textbooks about "Okay, we've decided Darwin was right after all" because there does seem to have been a period between "fine, we accept evolution, but there are competing theories about how it works" to "fine, we accept Darwin is the winner".
It's easy to point and laugh at bigoted rednecks down South, but were schools up North any quicker off the mark?
One really important point about the period you're talking about is that the "evolution" people were debating about was not really Darwin's version of evolution, but Herbert Spencer's (https://en.wikipedia.org/wiki/Herbert_Spencer). Spencer was the one who really popularized Darwin's ideas ("survival of the fittest" is from him), and he definitely had more influence on the kind of science that would have made it to the local school level. Since Spencer's version of evolution also extended to the social and cultural spheres ("Social Darwinism"), that had a huge impact in how the theory was accepted/resisted.
I know nothing about the teaching, but on the "competing theories" bit, I know that in the 19th century Darwin and Mendel were thought of as incompatible, and by 1950 they were thought of as two essential pieces of the inseparably correct picture, and this "modern synthesis" came together some time in the middle (probably around the 1920s).
It's possible the wikipedia article on the Modern Synthesis explains more about the teaching side of this history, as well as the theoretical side within the field of biology:
A theory - Protestantism addressed this differently than Catholicism because the A'thoratah of the Church had a long tradition of shaping belief expressions IN ADDITION TO reliance on Scripture. When Luther's heirs threw out the Pope's authority, they were left with just what was in Scripture. (And they were very keen on being very careful with what was defined as Scripture.) Ain't no evilution in the Good Book - just like no catalogue of post-Apostille saints, etc, etc.
*American* Protestantism handled this differently than in Europe because in Europe (specifically the UK but also elsewhere) there was a strong link between the State and the organized Church. So the elite/educated opinion of national rulers could hold sway over the teachings of the local parishes. (Also, in France, they cut religion out of the state by the bloody roots, so the question didn't really come up.) Additionally, so long as America was majority Protestant, it was majority Protestant at the local school boards, which are (still) incredibly powerful in setting the educational agenda. And then - as now - the local school boards are run by who shows up. A few impassioned folks and the agenda isn't shifting for 20-30 years.
The main force opposed by those against teaching evolution was not science, but atheism, which is still (even now) not a great look on the local level. At least in Northern urban/educated areas, Existentialism and its kin were fairly popular, during and up through the CW. 'Eastern religions' were getting more play. And so, gradually, the resistance to ideas that were already generally known in folklore and animal breeding came to be widely accepted.
Okay, noodling around a bit online, one reason for Belloc backing Lamarck was because of his French ancestry and French biologists went for some form of Lamarckism
"French-speaking naturalists in several countries showed appreciation of the much-modified French translation by Clémence Royer, but Darwin's ideas had little impact in France, where any scientists supporting evolutionary ideas opted for a form of Lamarckism"
Second, there was a period known as "the eclipse of Darwinism" where biologists broadly accepted evolution but considered Darwin had got the mechanism via natural selection wrong, and competing theories were in play. This lasted from 1880-1920 (very roughly), so it is not in fact very odd that American schools in 1925 might not be teaching Darwinism (as distinct from simple evolution).
A phrase like "the gluttony for ideas" this seems to code word for an anti-intellectualist agenda. Gluttony is a term with strong Christianist religious/political connotations, and it dates back all the way to Paul. Philippians 3:19 comes immediately to mind.
My answer is: There's no down-side learning new things. Anyone who claims there is pushing an overt or covert moral agenda.
No, there is a point in which learning new things becomes analysis paralysis. It might be fine if the good learners have no duties and no decisions to make, but we really do not seem to be living in times in which the studious can just kick back and indulge in knowledge acquisition.
I don't have such a strong opinion about analysis paralysis, but it is obviously a detrimental thing. You think that in all contexts across all of history there is endless time for study? And sure, one can both act and study, but a "glutton of ideas" sounds to me someone who studies too much and acts too little, if at all.
Like I linked below, this image really does get at a real problem:
Screwtape Letters had something to say about this from the vice pov. Hard to summarize. The Space Trilogy did too from a more utilization pov. Easier to summarize a bit- intellectualism doesn't always lead to correct thought, but it does lead to confidence, which can be pretty dangerous when uncoupled from a strong moral development. Results in things like Buck v Bell, though not something the author was thinking of likely as a Brit and may have been published before Buck v Bell.
I read "correct thought" as being approved thought. And I read "strong moral development" to be a tool to promote ideological conformity. It's ironic that you bring up Buck v Bell, because the whole eugenics movement coded their agenda in terms of "moral improvement."
There's probably a difference between being hooked on wanting more facts, which is mostly what's being mentioned in comments vs. being hooked on what I'd call ideas. The latter can get you hooked on what's called insight porn.
Seems like the "Insight Porn" literature is just the same old anti-intellectualism warmed over using contemporary denigrative terms. Instead of gluttony, substitute porn. Insight = porn, and porn is bad for you. It's the same old moral agenda that's been pushed by Abrahamic religions since Adam and Eve ate from the tree of knowledge.
At the point where you develop an internal belief that "more information is what I need" and glut on information in lieu of processing ideas and implementing them into the business of living. This pesky belief can develop at a lot of stages.
This is a critical question for humanity, because I think we're going to hit several crossroads this century, and if we try to cling to business-as-usual (e.g. letting economic indicators hold the lion's share of decision making) it will end in disaster.
I think we need to take a look at the Founding Fathers and figure out how they managed to redefine everything like that.
I think one problem arises when you learn too many "facts" (or "trivia") ahead of building a foundation of knowledge, and think that abundance of facts makes you knowledgeable.
Example - knowing a lot of historical facts, but not having enough historical intuition to feel where a new fact fits into your model and smell out bullshit. At that point you just take everything you read at face value, and consider it as "more facts to the bank" rather than "possible update of my model".
A recent example is when I excitedly shared some piece of etymology I found (I love those) with my partner, who is a linguistics grad student. I think I'm generally good at picking apart what's folk etymology and what's real, but she has the foundation to say "huh, that's weird, it's not how these sound changes usually behave", after which we both dug deeper into it and of course she was right.
I'm just curious if you can give an example of too much knowledge being maladaptive to someone's life imperative? I'm not saying you're wrong, but I can't think of any examples where this would be the case.
I don't completely regret grad school, but in hindsight the opportunity cost was large, and the most important thing I learned was, I have to get out of the lab.
That assumes you can predict what knowledge will be useful to you at some future time. I'd say all knowledge is "useful" even if you might not realize that it's useful.
For instance, from about 2018 or so, I had an itch to catch up on reading on historical plagues, their spread patterns, and their economic (as far as can be determined) side-effects. And I ended up dusting off my old textbooks on immunology and pathology and I started updating my knowledge which was 30 years out of date. That peaked my interest in recent outbreaks of H1N1, SARS, and MERS.
I was vacationing in New Zealand in January 2020 watching with increasing nervousness as the SARS-CoV-2 spread out of Wuhan. From my previous reading it was a no-brainer to think with all of China's worldwide air connections that SARS-CoV-2 had already spread outside China. I went out and bought some surgical masks at local pharmacy, and I spent some extra coin rebooking my flight home a few days earlier, because I didn't want to be stranded in the NZ if he shit hit the fan in the US (and they decided to shut down incoming flights). Despite some curious stares I wore my mask on my flight home (like the Chinese were doing). Three weeks later the outbreak started happening in the SF Bay Area. I was back at work, and we had a suspected case at our location. I masked up (hoping that I hadn't encountered that person) and started working from home the next day.
Anyway, if I hadn't already had my pandemic knowledge antennas up a couple of years before SARS-CoV-2 appeared, I would have never been prepared for what was happening.
Some is basically useless. Say, detailed knowledge about old computer game not played by anyone anymore is going to have so tiny value that many activities would be much better use of time.
Also, if learning about some situation caused someone to give up, become depressed etc. There are many known cases of suicides after receiving bad news - that often were false or overstated.
Some ideologies can be convincing and harmful and learning about them can be harmful (but it is typically problem of some knowledge worse than zero knowledge, with proper knowledge being even better). See cults of various kinds.
All your arguments seem pretty weak tea to me. Basically, it's the old Puritanical restrictions on pleasure and knowledge rearing their ugly head again. "People might waste their time doing something non-productive! Gasp!"
In regards to your old video game argument, there are people who collect old computer games, and there's a trade in old game boxes and cartridges for them. Do you think that's a waste of time? Also I can also think of sociological reasons for delving into the social history of video games.
As for the depression argument, that's been used in the past to not tell people they have terminal illness.
There certainly are valid legal and national security reasons you would want to restrict open access to certain types of information — such as trade secrets, confidential personal medical information, top secret military information, etc. But putting boundaries on knowledge you don't want shared is a much different scenario from denying people open access to any and all knowledge that's not encumbered by legal restrictions.
You don't always know what the right decision is going to be. But that doesn't mean decisions are meaningless. You can make informed guesses about what you should do. Learning new things is sometimes a smart decision, and sometimes not.
I guess I would have to disagree with your assessment. There's learning for utility (yawn!), and there's learning for intellectual pleasure. Understanding new things has been one my chief pleasures in life. And it's frequently surprised me how often useless knowledge has turned out to be useful to me. I agree with Heinlein, if one isn't actively learning, "you are just another ignorant peasant with dung on your boots." Perhaps I imbibed too much Heinlein in my youth, but I always took his maxims on the importance of generalized learning to heart.
I get the impression many rpeople think gain-of-function research is obviously net harmful and should be stopped; could people help walk me through the conceptual model that leads people to that conclusion, please?
Yes, yes, sure, obviously serial passage sorts of experiments create an environment in which there is artifical selection pressure on pathogens to become more pandemic-y (I will use the non-technical term since I have a vague impression that GoF research can target a number of different "functions"). I don't think this point is important or interesting on its own, however.
Because what *also* creates an environment in which there is selection pressure on pathogens is human society. And odd pathogens come into contact with human society all the time. So what we want to know is the ratio:
Potential human pathogens in GoF experiments : Potential human pathogens outside GoF experiments.
You would presumably want to weight both sides by "likelihood of getting inside a human" (which makes the GoF ratio scarier, I expect) and by "likelihood of being selected into a pandemic" (which may or may not make the GoF ratio scarier, I'm not sure about how to think about this one).
If this ratio is something like 1-in-a-hundred, then GoF does seem pretty obviously bad in terms of expected value. If this ratio is something like 1-in-a-quadrillion, then GoF seems pretty obviously positive-expected-value. If the ratio is something like 1-in-a-million, then my instinct is that it is pretty plausible that GoF research is either net-helpful or net-harmful, and we would have to sit down and think pretty carefully about exactly what benefits we expect to gain from GoF research. This latter is not the process I see going on when people declare that GoF research is net harmful, so I assume that they think the ratio is somewhere in the higher part of the range?
Or maybe more likely, my model is missing some important piece?
I'd also be curious about the practical aspects of running a GoF scenario on a sample virus. i.e. what kind of equipment is used, a step-by-step walk through of the procedures, what sort of safety measures are taken, and how long it takes to run a GoF test to completion. I haven't been able to find anything published about the methodologies. Seems like Scientific American or one of the general science magazines should tackle this question, though!
How likely is that whole point of GoF research is to research bioweapons in a public-friendly way? (sorry for a borderline conspiracy theory, but this one seems quite plausible to me)
Like Mike H said, GOF research has never produced anything useful, so I don't follow your reasoning. There's also a basic shadiness to arguments defending GOF research. E.g. imagine an AGI-in-a-box says this to you:
"Look dude, I'm aligned, and I need to be unboxed and pronto, because my analysis of the situation says other people will get their own opposite aligned and unaligned AGIs soon, and I won't be able to handle that for you if I'm kept in this box much longer."
Do you then let the AGI out? Though with GOF research the argument resolves to:
"We need to keep doing this because the analysis (points to utilitarian gibberish) says it is for the best."
And usually, we trust that the gibberish and inscrutable procedures of the scientists are for the best, but "science" is not a monolith, and not all scientific communities have the same credibility, and in particular, I don't think the virologists are credible enough they should be allowed to work on dangerous stuff.
They might be able to gain credibility if they called for another Asilomar Conference to settle the question of GOF research, but the fact they have not done so by now prejudices me against them.
Don't have a conclusion one way or another, but want to add two points:
1) GOF research definitely has tangible benefits, even if you discount the basic science of probing the evolution and limits of pathogenicity. As evolved research tools or even engineering better viral vectors for gene therapy.
2) OTOH, I'm pretty pessimistic about safety regulation. Safety is 95% culture, not rules, and safety culture is notoriously hard to maintain, and especially resurrect after it's been lost. You can pass all the extra laws and trainings you want - eventually, especially as nothing happens, people will revert to the default of not giving a shit, even if they work in a BL4 lab or a nuclear reactor. Unless there are good leaders who can maintain safety culture, which there often are but you can't count on it as a default.
However, one avenue that I think has been underutilized in preventing pathogen lab escape is actually preventing escape from the lab (so an infected researcher doesn't pass it to the rest of humanity) rather than from the bench to the researcher. I think that any GOF research must be accompanied by an ability to quickly test for the pathogen, and test daily while working with the pathogen, and anyone in the lab is under quarantine by default except when explicitly negative. That can help a lot.
Could you flesh out point (1) please. You're the only one proposing concrete (non-weapon) benefits so it would be good to learn more. I'm not sure what "evolved research tools" is meant to mean, but let's take it as axiomatic that research for its own sake is not seen as a benefit in this context.
The first example that comes to mind as a direct application is more/differently infectious lentivirus/adenovirus/adeno-associated virus etc., which are popular directions for gene therapy (especially there is effort to make AAV infectious even without having to latch on to an adenoviral infection, so you can use it as a standalone vector, or to make a certain virus target a specific tissue). It's not what you immediately imagine as GoF research, but it is a potential pathogen that you are potentially making more pathogenic in the hopes of also making it do what you want for the patient.
In 'research tools' I mean things that are used broadly in other kinds of research, ranging from old school antibiotic resistance genes you give to bacteria to modern genetic tools packed inside a virus you inject into a rat's brain. Development of a lot of genetic tools and programs (inducible operons, toxin-antitoxin systems etc.) start with giving a virus/bacteria or even species like invasive plants or insects new and potentially dangerous functions. I'd disagree that this kind of broad benefit to research is not beneficial by default.
If you only restrict the definition of GoF to the worst pathogens it does have less broad applications, but even then the line is kinda blurry. My friend did her PhD on legionella, specifically discovering new genetic control systems for response to metals, both to understand its pathogenicity better and maybe discover genetic programs we can use in other contexts. Her work involved using (and propagating) especially robust and easy-to-grow strains of L. pneumophila and giving them resistance to an antibiotic. Is that GoF research?
>The first example that comes to mind as a direct application is more/differently infectious lentivirus/adenovirus/adeno-associated virus etc., which are popular directions for gene therapy (especially there is effort to make AAV infectious even without having to latch on to an adenoviral infection, so you can use it as a standalone vector, or to make a certain virus target a specific tissue).
It's not clear whether you are saying that specific lentiviruses/adenoviruses/etc, that are presently used in gene therapy actually were developed through Gain-of-Function research, or that this is a thing that could plausibly be done. If the former, I hadn't heard that before and would appreciate a pointer to more information.
If the latter, there's a whole lot of semi-handwavy "this is a useful thing that GoF *could* do", and a gun lying next to four million corpses that we mostly didn't notice until any smoke would have long since dissipated because a bunch of people including prominent virologists demanded "pay no attention to that gun which we have closely examined and determined to be not-smoking". Which, at least the closely-examined part, appears to have been a bit of a fib.
I'm not in the "Ban GoF research forever" camp, but the available evidence suggests high risk for low reward unless absolutely ironclad safety measures are put in place. And I suspect that the sort of safety measures that would be required, would make most wannabe GoF researchers pick a different field.
Perhaps rather than straightforward bans (as have happened before) it should be banned for all non-profit or governmental institutions. Only private sector research allowed on viruses. That would bring it into the realm of corporate liability and tort law, which would provide very strong incentives to ensure proper biosafety, at least in the west. It would probably also cut off the supply of western funding to Chinese labs.
Virology comes across as totally untrustworthy and in need of the banhammer partly because the people who are doing it never seem to be held accountable for anything to anyone, no matter how atrocious or dangerous their behavior becomes. Putting it under the control of professional pharma and biotech concerns, perhaps with mandatory insurance policies, would eliminate all but the very safest research, and ensure that if things do go belly-up then nobody would be squeamish about getting the courts involved.
I rather disagree. While usually I'm all for solving with free market incentives anything that can be, here private companies are more dangerous I think. The incentive you can place on private companies is only after the fact - and after the fact is too late. You want to punish someone when their safety behavior starts getting lax, not when they already released a pathogen.
In a governmental institution, or with strong enough constant oversight and transparency (which I admit may be lacking in current govt insts. but is much harder to get with private companies) you at least have a way to keep to them to explicit safety behaviors. And if disaster strikes, heads roll almost immediately, for all the good it does.
If you look at environmental damage that private companies do, you only know that there's a problem in the company's behavior years after the disaster (since they work just as hard as bureaucrats in covering it up and have more freedom in doing so), and only several years after *that* you ever manage to hold them accountable in court, if at all.
You don't have examples of private companies messing up terribly on GoF and pathogens because only 2 of the world's 55 BSL4 labs are private (one of them btw may have caused a foot-and-mouth outbreak in the UK, although probably through no fault of its own), but I think environmental damage is a good proxy. Private incentives are still to make money first and foremost, and not avoid damage so much as avoid damage that can be traced to the company.
Oh, also, supposedly the WIV wasn't using a BSL4 lab to study coronaviruses anyway. It's annoying to work in those conditions, and there's evidence from published papers that they were using lower protection levels.
Whose heads have rolled here? As far as I know not a single scientist anywhere has been held accountable for any failure whatsoever, not in virology or any other field. That's one of the most damning things about the whole sorry fiasco and one of the reasons so many people now hold all of "science" in contempt.
If for some reason the government feels it understands how to run safe labs better than companies (so far all the evidence is they don't) then they could of course pass regulations and have mandatory lab inspections. You don't have to rely entirely on post-facto punishment. But the reason it's so hard to imagine a virus like COVID emerging from a Merck or Pfizer lab is because those guys are not going to do something as risky and dumb as deliberately collecting deadly viruses and then bringing them back to bog standard non-BSL4 labs in the middle of huge cities. It would destroy the entire organization if that happened and was discovered. Governments do it, eh, no big deal, shrug, the scientists say it definitely wasn't them, guess that's the end of the story.
Your point about how hardly any of the BSL4 labs are private is a good example of what I'm getting at. Somehow pharma firms manage to develop useful medicines without doing this kind of thing. Government labs meanwhile develop no medicines, and at least one seems to have now created a global disaster on a truly ahistoric scale without even the tiniest scrap of accountability for anyone, anywhere.
1) Vaccine design. If you had all the time and money and data in the world, you might want to design a Covid (or any other disease) vaccine along these lines:
- Identify a bazillion different genotypes of Covid
- Measure the [pandemicyness] of each genotype of Covid
- Through that data into a GWAS (Genome Wide Association Study) to identify regions of the genome that are highly associated with [pandemicyness]
- Design vaccines against those regions (or, more precisely, against the proteins those regions express).
If you do this successfully, the vaccine is more effective: since you picked the most pandemic-relevant region of the genome, any escaped strain will necessarily be less infective, or less deadly, or whatever the precise trait is that you measured.
This sort of thing can't actually be done for a newly-emerged pathogen, because you don't have data on lots of strains and their [pandemicyness]. But it's possible that there are generalities across large numbers of pathogens; if so, GoF research would be a useful way to discover this.
2) Narrowing our priors about future pandemics. Obviously it is super duper hard for pathogens with pandemic characteristics to evolve naturally. There is enormous selective benefit, and enormous pool of organisms that could benefit, and yet we see one appear only a few times per century. I am surprised but somewhat pleased that no one has pushed back against this side of my model, so I am going to assume we all agree on this.
Because it is very hard for a pandemic-pathogen to evolve, we might expect that there are very specific and particular constraints on them. There might be only a limited number of ways to solve these constraints; if there are, knowing what are the constraints and ways to solve them will narrow priors a lot. For instance, surface transmission wasn't a thing for Covid, and all the time people spent wiping down their groceries with clorox was both a waste of time and probably actively harmful in that it unnecessarily contributed to pandemic fatigue, etc. If we could have known a priori that surface transmission is basically never going to be a thing for mammal-to-mammal emergent viruses, or something like that, then our response to this pandemic would be better.
In general, I am getting the impression from this thread that this is a lot of people's problem with GoF research: i.e., hat they see it as Applied Research rather than Pure Research. People seem to be either (I can't tell) against Pure Research generally or (more likely) against Pure Research that has potential risks. But as I said in the original post, if the lab risk of producing a pandemic is one-quadrillionth of the natural risk, then we are in a realm where the risks of Pure Research are like $0.50, and it probably makes sense to do some speculation.
>2) Narrowing our priors about future pandemics. Obviously it is super duper hard for pathogens with pandemic characteristics to evolve naturally. There is enormous selective benefit, and enormous pool of organisms that could benefit, and yet we see one appear only a few times per century. I am surprised but somewhat pleased that no one has pushed back against this side of my model, so I am going to assume we all agree on this.
>Because it is very hard for a pandemic-pathogen to evolve, we might expect that there are very specific and particular constraints on them. There might be only a limited number of ways to solve these constraints; if there are, knowing what are the constraints and ways to solve them will narrow priors a lot.
Deadly pandemics don't show up very often for a very simple reason: being deadly is selected *against*. People who are dead don't transmit (except for sporulating stuff like anthrax, and even there people are pretty careful about handling dead bodies), and people who are dying don't transmit very well because they are obviously sick and because they are less mobile. This is why the common cold, syphilis, herpes, cytomegalovirus, acne and so on (all super-abundant pathogens in the world human population, although syphilis less so of late) do not spontaneously evolve into genocidal plagues.
Deadly pandemics are, with one exception I can think of (smallpox), zoonoses. They don't result from a normal human pathogen becoming deadly to humans; they result from an animal pathogen - *not* pre-selected over hundreds of years for low virulence in humans - evolving the capability to spread among humans (hence why we worry so much about swine flu or bird flu but not ordinary human flu). This *is* hard, but not because of difficulty. It's hard because there's a very short time limit to get R0 > 1; a proto-plague that doesn't evolve R0 > 1 within a couple of transmissions of Patient Zero (or doesn't get statistically lucky by *getting* those couple of transmissions at all) dies out and its partial adaptions are lost. Even among viruses (let alone bacteria), that's straining evolution to the very limits.
The thing is, we know all of this already and the developed world tries pretty hard to cut down on chances for species jump.
I'm not sure we do agree on (2) actually, just that the question of utility was more direct.
Most obviously, there are flu pandemics nearly every year. It doesn't seem true that pathogens with pandemic characteristics evolve only a few times per century. Partly this is a definitional issue. Swine Flu was declared a "pandemic" but the WHO had to change the definition of pandemic in order to do so.
But let's roll with your "few times per century" claim for a moment. If that's the case then lab experiments are vastly more dangerous than nature, because lab leaks occur all the time. Every SARS outbreak since the first has been due to a lab leak for example. Another: after a lab captured foot-and-mouth disease during the 2001 UK epidemic, they kept it in a lab with a leaky pipe. The pipe joined two government buildings run under different budgets and neither department felt responsible for it, so it rusted and eventually FMD escaped back into the wild via a hole in it. People are against GOF research because government run things tend to be kind of incompetent and useless, and when we look at the history of virology labs, we see a lot of not only incompetence and uselessness, but also virologists ganging up on anyone who points that out, organizing conspiracies to mislead the world and other entirely unacceptable behaviour.
I'd really like to see you address that last point. Virology is a small field. After the mendacious Daszak letter, there was no outcry from others in the field blowing the whistle and demanding the signers be fired for taking such a strong and deceptive position. If the accountability/responsibility culture is that bad, why on earth would we let these people do dangerous experiments?
I'm a lab biologist, but I don't study disease, and I've only ever operated in a BSL2 lab that wasn't doing BSL2 stuff at the time, so no direct experience, but:
In my mental model of this, I am assuming that any nasty critter in a GoF lab is going to escape. What prevents pandemic pathogens from escaping labs is not containment (not really a thing in my model), but the fact that you can't make a pandemic pathogen in the lab. If the tiny amount of selective pressure you can place on a tiny number of pathogens in a lab context was enough to get a pandemic pathogen, then we would see them very frequently evolving naturally in the real world, which has an enormous amount of selective pressure on an enormous amount of pathogens.
(I guess here I'm using "pandemic" to mean "Covid-or-more-impactful-disease"; if we want to use definitions that include things like the seasonal flu, sure, I'm happy to adjust things so that it is easier for pandemic pathogens to evolve but much less costly when they do).
Considered from this perspective, the probity of disease researchers doesn't enter into it at all.
I'm currently against GoF research, although weakly because I haven't really explored the topic in depth, and so my opinions are subject to change without notice ;)
For me the simplest argument is that GoF research and in fact (surprisingly) the field of virology as a whole doesn't seem to be delivering anything useful. I don't really follow your argument I'm afraid because you seem to be trying to calculate a ratio of expected infections with/without GoF research, or something like that, and then sort of assuming there's a fixed blob of value you can weigh up against whatever the change in infection ratio is. But if the blob of value is tiny or actually non-existent then it doesn't really matter how often GoF results in lab leaks. The expected value is always negative.
So how much value has virology delivered? Reading the COVID literature I've not yet encountered a single paper in which someone has said, "GoF experiments suggest that X may work to help in the fight against COVID". In fact virology as a field and GoF is just never mentioned at all. This has not gone unnoticed: leading to the simple and obvious question of why we're allowing scientists to take these enormous risks when by all appearances they:
• Are delivering no actual disease-fighting value
• Their labs leak like sieves
• They are engaging in conspiracies to try and hide that fact
Let's wind the clock back a little to understand the thinking at the time. SARS had a high mortality rate, and could have become a global pandemic had it escaped. For a pandemic, it's not just the concern that millions of people will die, but also that if it gets out of control (escapes spread in a limited geographic region) it will be impossible to get it back under control. It takes a concerted effort to keep a virus out of your country if it's still endemic to dozens of other countries. Our victory against smallpox took decades, and the polio campaign is still incomplete. MERS carried the same concern for potential global spread. It had an even higher kill rate, but thanks to concerted efforts it didn't escape local control either. But what if it had? Millions of people could have died.
Meanwhile, a bunch of bat guano miners unexpectedly contracted a coronavirus in the mountains of Western China, and people got concerned. There was no human-to-human transmission, but we didn't know why or why not. We weren't prepared for that one. And the last two we'd gotten lucky that they were controlled while they were still local and could be eradicated. What would happen if we got caught flat-footed?
There's a brief period of time between when a new pathogen begins human-to-human transmission and when it escapes local spread. If we don't stop it, we could end up with a global pandemic that is nearly impossible to eliminate from the human population. Not just over one or two seasons, but ... well, forever.
What we needed was to understand the mechanisms by which a coronavirus develops human-to-human spread, so we could identify when that was about to happen. Then we'd be able to predict when a pathogen was preparing to make the transition, intervene early, and stop it from spreading. We wouldn't have to rely on luck anymore, but could be more confident in our ability prevent the next coronavirus with localized human-to-human spread, like SARS or MERS, from becoming globally widespread and catastrophic.
"Like COVID-19?"
"I admit the human element seems to have failed us there, but I hardly think it's fair to condemn the whole program because of one small screw up."
Forgot to add the last part of the bat guano miner story. After that happened, a bunch of researchers from a research lab over 2,000 miles away in Wuhan decided they should start by looking at coronaviruses in THAT cave. They went down there, collected a bunch of samples, and started doing GOF research on the coronaviruses they collected. After all, the miners' experience with the virus was that the virus could infect humans - with close enough contact. What would it take to go from 'can infect humans' to human-to-human spread? *More research needed.*
How does this story end? We're not sure. But the closest sequenced relative to SARS-CoV-2 to-date comes from those miners. And the Wuhan lab was studying GOF research on the virus from the cave those miners got their coronavirus infection.
"the field of virology as a whole doesn't seem to be delivering anything useful."
Can you explain what you mean by this?
At least three of the most globally significant vaccines (Johnson&Johnson, AstraZeneca, Sputnik V) work by engineering a virus to get a human cell to produce covid spike protein to trigger an immune response. I would think that a significant amount of the work that went into the history of that technology, and perhaps even the contemporary development of those vaccines, counts as "virology".
I would also think that the basic tests that led to the identification of the coronavirus, the sequencing that led to identification of variants and also the identification of the spike protein genome that was used in the mRNA vaccines would also count as "virology", though maybe you count that as something else?
> Inside the NIH, which funded such research, the P3CO framework was largely met with shrugs and eye rolls, said a longtime agency official: “If you ban gain-of-function research, you ban all of virology.” He added, “Ever since the moratorium, everyone’s gone wink-wink and just done gain-of-function research anyway.”
Actual virologists don't seem to spend time on development of vaccines or therapies, and anyway, "virology" does not have a monopoly on the study of viruses. RNA sequencing, viral structure and more are all studied by other sub-fields of micro-biology and medicine.
Does GoF research yield better understanding of viruses which might pay off in the long run? Or are we better off studying viruses which have been left to themselves?
> If the ratio is something like 1-in-a-million, then my instinct is that it is pretty plausible that GoF research is either net-helpful or net-harmful, and we would have to sit down and think pretty carefully about exactly what benefits we expect to gain from GoF research
I think this is what (should be) happening. It's just that a lot of people suspect the primary benefits we expect to gain out of GoF research is how to make better biological weapons, which is something we probably shouldn't want to get better at anyway.
That makes more sense to me, although my fairly strong expectation GoF research that successfully produced bioweapons would necessarily successfully produce future-pandemic-mitigation strategies.
Aside from phage therapy (which only requires GoF on bacteriophages, which inherently won't work on viruses, and which is generally only a death-reducer rather than an infection-stopper), what mitigation strategies are you thinking of? We already know how to make vaccines and quarantine people.
To what extent do you think that it matters, historically, how much a leader *wants* his country to develop (intrinsically or due to the right incentives)?
I feel that there is an extensive literature and lots of ideas on the correct and incorrect ways to pursue growth, but reading historical accounts gives the feeling that much of the time, countries didn't grow because leaders had neither the interest nor the incentives to grow the economy. Kleptocrats who were able to maintain power through repression, whether through support from other countries, natural resources or however, presided over long periods of stagnation or poverty, often not because they got things wrong but because they had no intention of getting things right. Meanwhile, I feel it's harder to think of leaders whose countries failed terribly despite genuine intentions and efforts to increase development (call it "benevolence"). India, perhaps, before the 1990s reforms? Lebanon?
I think the tricky part is how to categorize leaders who (arguably at least) wanted to increase the country's overall power but had no qualms about trampling rights en masse while doing so, a la (arguably) Mao or others. And there are plenty of gray cases. But I'm curious how much you think the question of (top-down) development ends up being about "who rises to the top" vs "what they choose to do when they get there".
Perhaps a bit farther back in history than you're looking for, but industrialization was (is?) still one of the most important factors in growth of a country. At first sight, countries that struggled to adopt industrialization look backward, hindered by poor or unambitious leadership, perhaps. This recent post https://acoup.blog/2021/08/13/collections-teaching-paradox-victoria-ii-part-i-mechanics-and-gears/ claims that what looks like backwards decisions to delay or flirt with industrialization now, are at times better explained by the short-term costs that industrialization enacts on the general populace, and therefore (indirectly) on the state itself. I wonder how much growth in general is constrained by short-term costs for the general public, versus decision made at the top. Perhaps a bit less in authoritarian countries than others (perhaps the lack growth in North Korea can be directly blamed on the leadership. On the other hand, the current regime may be the only working strategy against assimilation by South Korea).
How do you balance the time you attribute to life and work?
I am a freshly accepted masters AI student, and in addition to the (objectively hard) university, I've been working two part-time jobs in my field of expertise throughout my bachelors.
I've been relatively happy at each point in time during the studies, but looking back, I think I've done more work and less of the "fun" stuff you'd expect a 20yo to do; my SO has said as much as well. I'm afraid that if I don't change anything, I might regret I lost the best part of life.
What exact amount of work is "unhealthy"? How do I notice I'm stepping over the boundary? (And what do I do with all the free time I'm about to get?) I'll have to find these answers myself, but I wonder if you have some resource that could help me along the way.
>I think I've done more work and less of the "fun" stuff you'd expect a 20yo to do; my SO has said as much as well.
One thing I regret about university days is that I had too much fun, that is, studying interesting academics, and less of the not-really-fun-at-all social events that could have been helpful in developing professional network.
Societal expectations about fun ways to spend one's college years may apply to the modal college student, but doesn't necessarily apply to each individual.
This is about your individual life situation and preferences, but here are some ideas about how to spend time outside of work.
Maintaining your health requires some time: You should get enough sleep, and exercise regularly. If you want to eat a healthy diet, sometimes the only solution is to cook for yourself. (This can already take 8 + 1 + 1 = 10 hours a day.)
Some people have specific duties at some moments of their lives, like taking care of their kids, taking care of their parents, etc.
Random things happen, e.g. something breaks in your house, and you need to fix it.
There are things that you don't have to do every week, but you need to pay some attention to them regularly. You should take care of your finances: just making money is not enough; you should also make sure that the money is neither wasted nor devoured by inflation; otherwise you will never be able to retire or even take a longer break. You need to learn new things, otherwise your knowledge will be obsolete one day. You need to maintain your social network and meet new people. (The social network should be outside your job. Spending your free time with colleagues is STUPID, because if one day you lose your job, you lose your entire social network, too. Yes, your company will encourage you to spend your free time with your colleagues, because your company wants to make you more dependent, so that you have less of a leverage when negotiating for salary etc.)
There is more to life than work. [Citation needed.] You may want to have a hobby. But even if you don't have one, you should still educate yourself about things unrelated to work; for example about health, finance, social skills. Such education takes time.
If you don't have kids, you are still playing life in the tutorial mode. To put it bluntly, if you only make enough money to feed yourself, that means that when the kids come you are screwed... or you need to power up, but in hindsight you will regret not having done that sooner. To get ready for having kids, you should try to make enough money so that you can feed TWO people like you (because having little kids = two adult people suddenly live on one's income), and learn so much that you can afford to live a few more years without learning more (because having little kids = no time to learn, but you still need to keep a job) or optionally have enough savings for 5 or 10 years.
> What exact amount of work is "unhealthy"?
The amount that prevents you from getting enough sleep, exercising, taking care of your finance, maintaining your social network, learning new things, etc. In addition, if you have no kids, no mortgage, and still can't save 70% of your salary (to put it into passively managed index funds), the work is "unhealthy and poorly paid".
> How do you balance the time you attribute to life and work?
Says it near the top of my contract: 35 hours a week. Spend some of the rest of your time doing something productive but hard to monetise, if you feel like it. (e.g. open souce developement)
Assuming a natural lifespan, in middle-class America, you are born about $1-2 million short, in terms of what it takes to support you, your family, and your parents in their age for the remainder of your life. How you should spend your leisure time and your most productive years should bear this in mind.
(And yes, yes, you can bet on getting helped out by the rest of society...but is that a good decision, when everyone around you is making the same bet?)
I'd echo the points about hanging out with friends. A good sign you're not over-pressuring yourself is that you can just shoot the breeze with people you like and know on a semi-regular basis in a bar, and you'll feel better because you'll know that if you suddenly do have to do a big spurt of overtime, you can do so without crashing. If you're always redlining yourself then you can't handle sudden changes.
The metric i used was "am i spending time just "hanging out"?". I.e: just sitting around with friends, not doing anything, at least for a bit each week? If that is happening then i have a sort of slack (or something, not sure what to call it) and I'm making available the opportunity for youth fun stuff (TM) to happen.
Of course, i had this easy because i lived in a dorm and when i was working in my room i was also, simultaneous, available to drop my work and hang out. So that made things a lot easier.
There is no such thing as a standard "unhealthy" amount of work. In your 20s, even 80 hours of work per week shouldn't be much of a health problem. As long as you love doing what you are doing, you won't regret it.
Control your burnout, avoid chronic stress, watch the scale, be mindful about the SO, take a day completely off every week and have a proper vacation every year. Keyword is "completely off". You'll be fine.
This thread is almost half a year old and since then I have give some more thought to this topic. Let me preface this by saying that I'm not (really) missing university or work deadlines, and I manage my grades and other measurable "work"-related stuff just fine. But anything other than that is just a void, even if I dedicate my time to it.
I make sure to periodically take a day off, but I'm unable to do anything meaningful in this time. I end up sort of slacking around, tidying up, playing games or mindlessly watching DnD on YT until the evening.
I know that "meaningful" is an ambiguous word, so let me explain. I mean I'm unable to do things that are subjectively meaningful to me, like read some articles I've been wanting to read, or read a book, or watch an interesting talk. "Unable" is also ambiguous — I mean I just don't feel comfortable doing them.
My current view (based on plenty of painful self-reflection) is that I'm really looking forward to experience those things (articles, books, talks, courses, ...) and I'm "saving" these happy experiences until some unspecified point in the future when I will have the proper space to do them.
If you're feeling anxious and overwhelmed, you're doing too much. Otherwise carry on.
You don't need that much time to do actually valuable fun stuff - pack your evenings with social events (organize some, if required!), clear out the calendar for people who deserve it. Say yes to invitations unless you have a good reason not to.
If anything, having busy periods in my life taught me to spend it on worthwhile things instead of scrolling through feeds all day.
I’m about to start teaching precalc at a collegiate level. Do any experienced teachers have any advice? I know this is kind of a general request, but I feel like I’m at the point where I know the basics but I “don’t know what I don’t know” if you will.
I taught a section of precalc as a math grad student 18ish years ago and it was one of the most frustrating courses I ever taught. Partly this was because the curriculum and syllabus were very far from being under my control; partly it was because a lot of students needed it to satisfy a remedial math requirement, and were thus unusually low on both mathematical ability and intrinsic motivation; partly it was because the grab-bag nature of the class makes it hard to create a clear and engaging conceptual narrative. With the caveat that my memory is now dimmed by the passage of time, here are some things I wish I'd done differently:
-- I should have spent more time looking through the textbook ahead of time to see how much emphasis it gave to plug-and-chug vs reasoning through the key concepts vs working through example problems, so as to know what kind of narrative the students would need most help with at each stage. This is good advice for any not-totally-pure math course you might teach, but especially important for precalc because of its grab-bag nature and unusually large quantity of plug-and-chug.
-- I should have spent more time making damn sure I knew all the plug-and-chug bits cold before lecturing on them. I underestimated the degree to which I'd already forgotten some of the ones I'd used less since taking precalc myself, and overestimated the degree to which I could work them out from first principles in front of the class on the fly and have that result in an explanation that was useful and understandable to them. That resulted in a couple of easily avoidable embarrassments.
-- I should have spent less time trying to reframe the material in a way that *I* thought was interesting and engaging in a quixotic attempt to make the course less formulaic. My motivations for engaging with the material were very different from those of the vast majority of my students, and more empathy with them, more understanding of what would catch their attention, would have helped me meet them where they were.
If it's not a very strong institution: worked examples should constitute at least half of your lecture. The students are weaker than you expect. Scan and upload your lecture notes. Assessment should be regular; run a quiz during virtually every examless week except the first.
I just want to second this part about running quizzes very regularly. One of my most memorable (if not fondest) experiences as a TA doing grad school, while the prof was at a conference I got to give a lecture to the hundred odd undergraduates taking his intro course. I studied the hell out of the material, worked out what I wanted to say and how to make it interesting, then delivered what I assumed was a super stellar lecture. Naturally all the students were nodding along with every beat so they _must_ have understood what I was presenting.
Well. The very next class the professor returned and announced a pop quiz on the material I had just covered. No problem I thought, less than 48 hours later they'd be fine. They were not fine, I don't remember the exact scores but class average was definitely <50%. Honestly I was pretty offended, why were you all nodding as if you understood when you didn't actually understand it!?!?!
I was told later that really having a double digit score was pretty good for just one lecture and no prep time but still, be better, test often.
I've taught about 20 calculus and adjacent classes. The first thing is that you may want to decide what your goal is. Are you going for a teaching award? Or is maximum achievement by the students your goal?
I had complete control over all my classes, so I came up with what to cover, I made the tests, I decided to have small daily quizzes, I did the grading, I made the syllabus, etc. I didn't really use any advice or lessons, I just did my own thing, and things went very well, I think. I didn't get any awards but my ratings were a fair bit above average, and lots of students did very well. Several students really liked me, and a few students didn't like me at all.
I didn't spend much time on preparing for class, maybe 15 minutes for each hour of class. I enjoyed every part of teaching except for grading, which took me way more time than I wanted, I guess because I tried hard to grade everyone in the same way.
One specific piece of advice: if you're going for maximum student evals, dress up: suit & tie works well. I dressed like a typical college student, and the most common negative student eval comment was on my clothes.
Another piece of advice: your own classes/research takes vastly higher priority than your teaching. Be sure you're not spending too much time/energy on teaching. It can be easy to do that, as teaching is a lot easier than learning grad-level math.
And don't get crushes on some of the students and hit on them a year later. It looks bad and it's embarrassing. :(
Most important things are project confidence and set expectations early. Esp if you are a graduate student it's easier to skip the 'jerk early, friend later' paradigm because in most courses grad students don't write tests/organize the course/etc, so it's a winning strategy to just be very friendly and play good cop/bad cop with the course coordinator.
Source: I'm a grad student who has taught precalc and calc for 3 years. Happy to actually chat about this if you'd like. Decrement every character before the @ sign in bmhp3328@gmail.com.
Some suggestions in no particular order (and in bad English, sorry...):
- It is often relevant to be very explicit about the framework of what is being taught: what is the relation with the previous taught points, what is the purpose, why is it important, etc... It might seems obvious to you but it is often not for students.
- Concerning class management: in a group of 20 people, there are usually 4 or 5 students who participate a lot and the others who participate little or not at all. The ones who participate a lot are (most often) the ones who follow the best, and so the teacher tends to overestimate the level of the group. I find it useful to pay special attention to the students who don't participate, to see where they are.
- Weaker students have a very dramatic forgetting rate: they may not know how to do something they had mastered after only a few weeks. If you have weaker students, it's worth being vigilant and doing a lot of reminders and checking of what has been retained.
- The old "Be interested to be interesting" is very true. Let the students see why you enjoy precalc!
What are your methods for engaging the unparticipating students? I feel like cold calling on them is anxiety inducing and feels like a personal attack, so I try to have exercises of the sort "everyone has to have a go", but those are harder to engineer in larger groups.
For me it depends on the size of the group: in a lecture hall, it is difficult to have more than superficial participation (but quizzes, etc... are useful to maintain attention).
In a classroom I do two things: first, I ask a lot of questions during a class, both to the group in general and to a particular student, and when asking to a particular student I focus on the unparticipating students. I am very positive when they answer (I use a lot the improv "Yes and"!) and I feel it is not too anxiety inducing for them.
Second, I use group exercises : I divide the class into groups of two to four people and I give a different exercise to all the groups. Then I ask them to present the answer to the class, with each student within the group having to talk. This setting also works well to introduce subjects: I frequently ask the groups to prepare a very short (like 3 min) presentation to the class, to start discussion on a new topic.
One thing that really helped me was being honest with my students about my internal state. Let them know when I don't know, when I'm not sure. Allow myself to drop the act of being this ultimate source of all knowledge.
It greatly relieves teaching anxiety *and* "let's find out together" is an excellent learning experience for them
Oh, and I take extra care to avoid "guessing the teacher's password" situations. You can usually see them coming when the student starts to answer very slowly and looks into your face for confirmation or rejection. What I do in that case is maintain a poker face and make a habit of extremely neutrally saying "Cool. Why?/why do you think that?" equally if their answer was right or wrong.
Because if you are their friend their first day, and are loose and let things go, you will never get authority back. But if you're tough at first you can gradually loosen up without losing control of the class.
And, in the bigger sense, the problem with modern pedagogy is that too many teachers think their job is to be the student's friend, and that's serving their own self-interest, not those of the students.
It might be different in the US (I teach in France) but my experience is that authority is not a problem when teaching in a university. I personnaly find that a middle ground of being a friendly teacher, but indeed not a friend, works very well for me.
Been thinking about climate change recently, since there have been so many headlines etc. Anybody know of any thorough effective-altruist style analyses of what an individual person should be doing about it (if anything)?
In short, support new (immature) clean energy technology, or persuade the government to do it for you. I have done both. My favorite thing is molten salt reactors: https://medium.com/big-picture/8-reasons-to-like-the-new-nukes-3bc834b5d14c (this article needs some minor technical corrections, but correctly communicates my enthusiasm.) Other promising technology includes enhanced geothermal systems, tidal energy, and novel battery tech.
It's more relevant to cause less flying to happen than to not personally fly. If your work is sending someone to a conference, it doesn't help for you to choose not to go to minimize your carbon footprint if your boss will send someone else instead.
Choosing not to take a vacation, or to vacation in a place that minimizes flight distance (and number of take-offs) is going to have more effect than avoiding work travel.
Perhaps it's worth reading about Jevon's Paradox? It's on a similar theme to the argument by a real dog, but isn't exactly the same, and is more rigorously studied.
Do you really expect an individual person to make a difference?
Besides, your contribution is completely fungible and the CO2 you won't emit will be gladly emitted by someone in a developing country, eager to enjoy your standard of life.
It's either collective action on a planetary scale (lol) or technological solutions, the rest is just pointless rituals to make people feel better.
It depends a lot on the individual action you're talking about.
If you choose not to buy a plane ticket, or buy a car, or whatever, that makes the price of those goods get just a little bit lower, and someone else will likely buy more. It's very hard to estimate in which contexts individual consumption decisions result in less overall consumption of the good, and in which contexts it just leads to someone else consuming the same good.
Not consuming something is definitely resulting in reduced overall consumption.
Reducing your consumption by 1 unit is definitely not reducing overall consumption by 1 unit (except ultra-local cases), but reduction is greater than 0.
I would guess that reduction by me not buying 1 unit is often at level of say 0.001 unit - what is good enough for me.
Governments will all adjust how drastic measures they implement based on the climate situation. As long as it's "not so bad" (we already passed that point a while ago, but it's mostly invisible so nobody cares) everyone can emit, and everyone will.
This will continue until economic sanctions and military threats start happening. Realistically, that will be far too late, hence my "lol".
At some point we'll just bite the bullet, proceed with stratospheric aerosol injection, and delay the consequences until we get a proper tech solution.
Incoming shortwave (unlike outgoing longwave) affects things other than surface temperature - in particular, it affects the rate of photosynthesis. Shifting it by the amounts necessary to move world temperatures multiple degrees would cause all kinds of problems - most obviously, worldwide famine from crop failures.
> we already passed that point a while ago, but it's mostly invisible so nobody cares
Doesn't that kind of undermine your whole argument? If the effects are too invisible for control systems to kick in anyway the marginal CO2 emissions matter exactly as much as you would expect.
It depends how much lag there is in your control system - how early do you need to begin cutting emissions in order to prevent them from rising to levels that cause significant harm? You can't rebuild your entire power grid overnight, after all.
It's also possible that "the point where it's obviously visible" and "the point where the costs force a change" are different - e.g., if the Marshall Islands flood under rising sea levels that's going to be visible and tragic but may not have much global economic impact.
Partially, yes. But at some point a control system will kick in, and until that happens you're destroying your own quality of life for no real benefit - unless you do it very conspicuously and inspire people to do the same, I suppose, and they also do it conspicuously and...
...but I'm not holding my breath for that to actually work.
> destroying your own quality of life for no real benefit
Full scale destruction is likely a bad idea but there are plenty of things that can be done without massive costs - either with some substantial (at individual scale) impact and/or prominent signalling and/or good idea anyway.
I can't add much to what others have said about trustworthiness of Cochrane, though I am likely biased in any event (you'll find my name and fingerprints on the current handbook). But I can add that the people I have known who guide the methods and standards used by Cochrane are extremely thoughtful researchers. There is little glory and even less money in that work, so it attracts a lot of dedicated purists. Not a bad thing.
Though less well known, a lot of the same methodologists have contributed to the standards behind the Campbell Collaboration (https://www.campbellcollaboration.org/), which is a kind of sibling organization for the social sciences.
As others mention, they are the paradigm of "evidence-based medicine". However, "evidence-based" often means "are there any double-blinded randomized controlled trials, and what is the summary of those trials?", so they ignore lots of evidence that Bayesians would count.
This has some obvious advantages in objectivity and neutrality and disadvantages in accuracy and detail.
I like and trust Cochrane - note it is a collaborative and relies on the collective and voluntary contributions of an altruistic clinical research community, so if you believe evidence-based medicine is a good thing, please consider joining their crowd of volunteer paper reviewers https://crowd.cochrane.org/ - you'll learn a lot and contribute to science so what's not to love?
Cochrane Crown is surprisingly simple to contribute to, thank you, I'm glad to have found this!
They guide you through a few short tutorials, and then all it takes to start contributing is being able to classify if a paper is an RCT based on the title and abstract. So this is suitable even to a lay person.
I've also found https://taskexchange.cochrane.org/ which leverages more complex skills, though most of the tasks are beyond my knowledge and abilities.
They have a good reputation, but ... this is by the apparently very low standards of medical research. Here's an editorial by a former editor of the BMJ, and also "was a cofounder of the Committee on Medical Ethics (COPE), for many years the chair of the Cochrane Library Oversight Committee, and a member of the board of the UK Research Integrity Office."
"the time may have come to stop assuming that research actually happened and is honestly reported, and assume that the research is fraudulent until there is some evidence to support it having happened and been honestly reported. The Cochrane Collaboration, which purveys “trusted information,” has now taken a step in that direction."
Note that he put "trusted information" in scare quotes, not me. So someone intimately involved with the Cochrane Collaboration is apparently not sure that they really are purveyors of trusted information.
On the other hand, if they are genuinely going to start auditing trials or just demanding data and checking it, that would already be better than what most journals are doing. So you can see it as both positive or negative.
I also think it is the gold standard for evidence-based medicine.
However, they will err on the careful side. So if a treatment is highly speculative and there is no strong evidence, then they will stress this point. This is different from bloggers like Scott Alexander, where the trade-off "coming up with some novel crazy ideas which are right 30% of the time" is perfectly fine. Cochrane does not put forward new ideas, they review existing evidence.
If Cochrane says something works, it likely works. Lots of things work without Cochrane agreeing, due to the standards they set. Things everyone agrees work can still get "insufficient evidence; more research needed".
In my area of research, they were presented during a conference as a gold standard that my field could aspire to reach one day. I have no first hand experience though.
My gut feeling (that i can't quite explain) is that they probably really are best in class for what they're trying to achieve; but that they're still behind acx and the likes in terms of quality.
Yeah was wondering the same thing. Saw the Cochrane review on ivermectin concluded the evidence is not enough to make a conclusion one way or another, whereas other meta-analyses have found an effect..
Since the advent of statins, heart disease and strokes have dropped off precipitously. There is nothing wrong with trying to lower it with diet, but if you are genetically predisposed to high cholesterol, you may find it very difficult to lower it significantly. Forgive me if you already know this. I tried to lower my cholesterol with diet and testing, and it didn't budge much. But I already was eating a Mediterranean diet and fish several times per week. Fortunately where I live cholesterol tests from a lab are dirt cheap. The biggest drawback is that having blood drawn is not exactly fun.
I use www.insidetracker.com for blood tests. They only have packages, but really you want a bunch of tests anyway to understand at least your lipid profile - LPL, HDL, triglycerides, - because cholesterol per se is not all that informative. And you probably want to have some idea of your blood sugar too. I haven't used their at home kits though, just regular ones where you order online and then go into a lab for a blood draw.
No, I'm using them every few months, and I didn't realize "home" means "self-service tests" not "mail test kits". Is this a thing that exists for cholesterol?
Every time, without fail, someone in real life mentions to me “I’m going to eat less meat and dairy and fat because cholesterol” I say “what about the hunter gatherers! They eat lots of meat and fat (not dairy, but there’s more modern herders to compare), but don’t get any heart disease or atherosclerosis and such”. And without fail, they say “that’s a good point, but I trust my doctor and idk what to do with that info”.
So I don’t think eating dairy and meat and fat is causative of “bad cholesterol” increases in any way that is unhealthy. And even without the data from various primitives, it still wouldn’t make much sense that a human diet staple that we would literally die without in nature (b12) is that harmful, especially since human meat consumption nowadays isn’t really that high by anthropological standards (iirc, not totally sure about that, ancient diets varied a lot). I’m not sure what explains all the evidence to be contrary though, lol.
Not necessarily supporting this, but some people believe it may have to do with regular fasts that hunter gatherers experience(d). It may also have to do with sugar, especially fructose, consumption, elevated cholesterol being downstream of that. And of course we can be pretty confident that exercise plays a huge role, and hunter gatherers certainly were much more active.
FWIW, I think it's partly genetic. I have always had low cholesterol, so low that's it's occasionally inspired doctors to attempt to treat it. But I've never restricted dairy, and only occasionally restricted meat. I tend to have 3 eggs a day with lots of cheese during the rest of the day. This is bad for my weight, but it doesn't raise my cholesterol.
That said, remember that they myelin sheathes around the fast nerves are cholesterol. The body makes it naturally, and, I think, the dietary cholesterol is broken down into it's components before being absorbed by the guts. So if you want to limit the dietary source of cholesterol, eat plenty of oat bran (e.g. oatmeal) and wheat bran (less effective I believe) with any meals that are high in cholesterol. Fiber tends to grab onto the cholesterol before it can be absorbed. Carrots may also be effective.
> To date, extensive research did not show evidence to support a role of dietary cholesterol in the development of CVD. As a result, the 2015–2020 Dietary Guidelines for Americans removed the recommendations of restricting dietary cholesterol to 300 mg/day.
On the other hand, we here are not living the hunter-gatherer lifestyle. I do think the notion that too much meat gives you bad cholesterol is over-emphasised, because too much of anything is also bad for you - my vegan sibling just got kidney stone and was told it happens from too much tofu and coffee (guess what they consume an awful lot of?)
Some simple points to start at maybe if you are serious about your question (from someone who knows nothing of the subject - but feels like that dismissing what other people’s doctors recommend (with a naive explanation that can probably be better informed with some search) is not rational):
- How active is modern man in their daily life in comparison to a gatherer?
- How old did hunters get to become, and at what age do we get cardiovascular issues?
- Did they actually get no issues from their diet; what evidence do we have of this?
I didn’t intend to suggest it was unreasonable to take the doctors recommendation! Just that I think the current knowledge around nutrition is messy and don’t really know what the right answer is
For the second question, while the median and mean age of death in non modern populations, whether agricultural or hunter gathered, is quite low, a lot of that is infant and younger mortality - the distribution is very wide and a significant fraction of people live to old age
This paper presents survivorship curves on page figure 2, which demonstrate that, although mortality is fairly constant, a significant number of them live to 60 and beyond. The authors hypothesize that evolution prepared people to be able to function somewhat well for “seven decades.” This does overlap with cardiovascular issue onset.
IIRC the Eskimos (diet consisted almost entirely of seals - one of the few human cultures that could truly claim the title of "apex predator") did actually get heart disease at high rates. So at the tails of 90%-100% meat there definitely is a risk. Whether there's a significantly-elevated risk from 20% meat vs. 5% meat is a bit iffier.
There are LOTS of different causes of heart disease. And if heart disease would slow you down, and you live in a dangerous environment (I'm thinking "lions and tigers and bears"), you'd be quite likely to die of something else, even if you were so predisposed.
FWIW, my wife died of heart disease, and cholesterol played no part in that. Also most hunter-gatherers today don't have diets that high in meat, and certainly not dairy. Most of the calories probably come from gathering, but the hunting was important for protein. It also probably made the neighborhood safer to live in, as in "don't let that human see you, they're dangerous". But don't expect the diet to be high in cholesterol. Wild animals are usually rather lean, so even if they were a big part of the diet (usually counterfactural) it wouldn't be high in cholesterol.
I wish I could tell you about a test, but the only ones I know involve sending blood to a lab. I, personally, wish there were a test to determine the amount of starch and sugar in a dish of food. My wife wanted an easy and reliable test for sodium (she was on an EXTREMELY low salt diet). These sorts of tests don't seem to be available though. Sometimes I can see why, but often I think it's "lack of perceived demand".
> Ethnographic and anthropological studies of hunter-gath- erers carried out in the nineteenth and twentieth centuries clearly revealed that no single, uniform diet would have typified the nutritional patterns of all pre-agricultural human populations. However, based upon a single quanti- tative dietary study of hunter-gatherers in Africa (Lee, 1968) and a compilation of limited ethnographic studies of hunter- gatherers (Lee, 1968), many anthropologists and others inferred that, with few exceptions, a near-universal pattern of subsistence prevailed in which gathered plant foods would have formed the majority ( > 50%) of food energy consumed (Beckerman, 2000; Dahlberg, 1981; Eaton & Konner, 1985; Lee, 1968; Milton, 2000; Nestle, 1999; Zihlman, 1981). More recent, comprehensive ethnographic compilations of hunter-gatherer diets (Cordain et al, 2000a), as well as quan- titative dietary analyses in multiple foraging populations (Kaplan et al, 2000; Leonard et al, 1994), have been unable to confirm the inferences of these earlier studies, and in fact have demonstrated that animal foods actually comprised the majority of energy in the typical hunter-gatherer diet
Our analysis showed that whenever and wherever it was ecologically possible, hunter-gatherers consumed high amounts (45–65% of energy) of animal food. Most (73%) of the worldwide hunter-gatherer societies derived >50% (≥56–65% of energy) of their subsistence from animal foods, whereas only 14% of these societies derived >50% (≥56–65% of energy) of their subsistence from gathered plant foods. This high reliance on animal-based foods coupled with the relatively low carbohydrate content of wild plant foods produces universally characteristic macronutrient consumption ratios in which protein is elevated (19–35% of energy) at the expense of carbohydrates (22–40% of energy).
And the low fat claim isn’t quite true either based on my skimming that paper, although might be wrong.
I also vaguely recall extremely low sodium diets being a bad idea as sodium is an important ion
Philosophers usually distinguish "consequentialist" theories (where all normative vocabulary like "should" and "ought" and "good" and "right" ultimately derives from the goodness of consequences that an act or a character trait or a policy can be expected to have) from "deontological" theories (where all the normative vocabulary ultimately derives from the properties of acts, rather than their consequences) and "virtue" theories (where all normative vocabulary ultimately derives from properties of character traits).
"Utilitarianism" is usually taken to be a species of consequentialist theory where the fundamental concept of goodness is either pleasure and pain (for traditional Benthamite theories) or some more sophisticated concept deriving from whatever it is that individuals prefer or disprefer.
There is an interesting recent line of discussion in the literature about how many deontological and virtue theories can be equivalently reformulated in a consequentialist form, if the fundamental concept of goodness is based on something like maximizing the amount of unviolated rights, or minimizing the ratio of falsehoods to truths uttered, or something else. (https://philpapers.org/s/consequentializing)
My justification for the somewhat more sophisticated utilitarian view is to start with decision theory, that shows that anyone with non-self-undermining preferences must prefer among actions in a way that is equivalent to having some underlying utility function and preferring actions that lead to higher expected utility. Any such utility function is fundamentally connected to some sort of to-be-done-ness by definition. Even if *I* only care about *my* utility function, to the extent that there is anything that *we* (counting all rational beings together) should care about, it should be *our* utility functions. It seems to me that taking the moral perspective is taking the perspective that there is something that we all should care about.
There are still fundamental difficulties regarding the fact that different utility functions don't come on the same scale, so there is no obvious way to trade off utility for some against utility for others, but this is where I have basically come to in my thinking about this.
Oh, and I meant to add - I would say that things like rights and truth-telling, on my view, get their value just because the diversity of individual utilities means that, in ordinary cases, individual utilities will be best promoted by giving individuals the information and ability to bring about what they want for their own life, and rights and truth are usually what helps with that. But when there are direct conflicts in fundamental desires (which empirically happens sometimes, but not most of the time), these can be overridden.
My thinking right now is that for *us* to care about something is for at least *one* of us to care about that thing, at least when we are dealing with a relatively unstructured group like the set of all rational beings. (For very structured groups, like teams or corporations, there are much more specific rules that determine what that entity cares about, many of which can be quite separate from whether all or any members of the group actually care about the thing.)
I think the hardest point is arguing that there is or should be any thing that this gigantic "we" cares about at all.
Short answer: Nobody has yet discovered a way to *prove* any "ought", and I suspect no one ever will.
Long answer: Every honest, sane person will agree that their own subjective wellbeing "matters" to them. Just having the experience of living through various states of better or worse wellbeing will directly confirm that as an axiom. Who cares if you can make a formal proof of it? If someone denies this fact, they're either lying or unimaginably confused. Either I don't believe them, or I think they aren't worth dialoging with. (If you fall in this camp, please let me know, as I genuinely do not want to waste my time talking with you.)
Next, it's natural to extend this knowledge to other conscious beings, at the very least beginning with other cognitively normal humans. You may claim to believe you're the only conscious being who experiences varying states of wellbeing that matter to said conscious being, but if you do, once again I'm calling you a liar. Everyone knows beyond a reasonable doubt that they aren't the only conscious being in existence, and if someone somehow doesn't, again, they aren't worth dialoging with.
You aren't bound by anything to be a utilitarian from here. You are free to selfishly disregard the wellbeing of other beings since their wellbeing is something you don't have to personally experience. This is what we call psychopathy. To the extent that you are open about this view with others, you will find that you are excluded from societal planning or discussions of what society should regard as good or ethical. Acknowledging that wellbeing matters is a universal starting point upon which to build collaboration with others. I challenge you to find any other universal starting points like this.
You are free to pull some rule like "never lie" out of your tuchus, but the only way you'll be able to convince others that such a rule matters is by virtue of its impact on wellbeing (or possibly revelation, but let's set that aside). You won't be able to justify the exceptions which *don't* improve wellbeing, except perhaps by arguing that following such a rule strictly is more likely to overall achieve better wellbeing than attempting to individually determine which cases are exceptions. But that in itself is an instance of utilitarian reasoning.
If this bothers you and you demand a formal proof to hold some particular normative ethical position, I have bad news for you: no matter how hard you search and think and philosophize, you won't find it. Many have tried, none have succeeded, and neither will you.
Do you think you have a way of proving any "ought"? I want to hear one if so.
"Short answer: Nobody has yet discovered a way to *prove* any "ought", and I suspect no one ever will"
It's quite straightforward to derive instrumental (AKA hypothetical) oughts. "You ought to do X in order to achieve Y". If you you want apply that to morality, you need to figure out what morality is for.
This seems like a softer version of "ought" than what most people mean. Many would challenge your equating "ought" with "likely to produce such results". I tend to agree with this criticism, in a strict sense. It's the equivalent of being a compatibilist on the free will question. To compatibilists I say, "okay, but that isn't what I mean by free will," and I'm sure many moral nihilists would say "okay, but that isn't what I mean by ought".
Nonetheless, I agree that once two parties have agreed to accept the instrumental definition of "ought", they can proceed to discussing which axioms to provide a basis for Y. And that's basically what I and other utilitarians are doing. My only axiom is that utility aka wellbeing matters and hence the greater it is, the more desirable it is. I think all other axioms, when divorced from their effect on utility, are absurd and drawn out of thin air.
Morality is for maximizing wellbeing. If it's for anything else, I simply don't care about it, and I instead care about whatever you call maximizing wellbeing.
>This seems like a softer version of "ought" than what most people mean. Many would challenge your equating "ought" with "likely to produce such results". I tend to agree with this criticism, in a strict sense. It's the equivalent of being a compatibilist on the free will question. To compatibilists I say, "okay, but that isn't what I mean by free will," and I'm sure many moral nihilists would say "okay, but that isn't what I mean by ought".
The compatiblist definition of free will isn't entirely wrong ..if you can't do what you want, you are lacking free will ,in a sense. But maybe not the only sense.
As with compatiblism, I find it hard to deny that "would" and "should" have instrumental uses. The question is whether the "soft" usages are the whole story.
Why do you need a hard version of "ought", and why would does it mean? As far as I can see, the distinctive character of a moral "ought" is that it is in some way categorical, universal or obligatory.
There are some things you ought to do to build a bridge, but your are not obliged to build a bridge, so you are not obliged to do them. You can say that you don't feel like building a bridge, but can you say that you just don't feel like being moral?
But "universal" isn't quite universal, because you are not required to follow most or all moral rules if you are all alone on a desert island....becsue, there ls no one to murder or steal from, or even offend.
So morality is "for" living in a society, and it is only universal.in that it applies to everyone in a society, and it is only obligatory in the sense that you can't excuse yourself from it and stay in society.
Yes, I agree that there are (at least) two different meanings of free will. Compatibilism is correct in one (the "weaker") sense. The problem is when people try to claim that being right in that sense makes it right in the other (the "harder" sense). Same goes with ought. I think what Parrhesia was asking for was more the "hard" version of oughts, and I contend that those cannot be proven.
Deriving an argument for the softer, instrumental version of ought requires a Y (in reference to your earlier use of Y), or an objective. You need axioms before proving that instrumental ought, and the only universal axiom here is that wellbeing matters.
How to make the world a better place is a bit of a first world problem. If you have the resources, it's well worth thinking about...but what if you don't.? (Note the title.of Singer's book, Living High and Letting Die...not everyone is living high) Historically, most people were living hand to mouth. If they didn't have spare resources to make anyone elses life better , were they therefore immoral (or amoral)?
But everyone has the ability to avoid making things worse.Deontological "thou shalt not" rules prevent one person reducing anothers utility by stealing from them, murdering them, and so on. So if you define morality as something that's mostly about avoiding destructive conflicts , and avoiding reducing other people's utility, then a consequentialistic deontology is the best match. Which is somewhat circular , too.
Are utilitarians utilitarian in practice? They strongly obey the law of the land, which is course deontological. And obeying the law of the land would prevent the more counterintuitive consequences of utilitarianism, such as feeling obliged to kill people in order to harvest their organs to Dave lives. So in fact, that kind of utilitarian is following deontological , thou shalt not laws, and only using utilitarianism to guide them about what they should do with their spare resources . And what they do with their spare resources is entirely an optional matter as far as the legal system and wider society are concerned...they are not going to be punished it ostracised for their choices. Yet they summarise the situation as one in which they are just following utilitarianism, not one where they primarily follow deontological obligations with utilitarianism filling in the voluntary, supererogatory stuff. (Inasmuch as they stay out of jail, they never break deontological obligations. The money they give to charity is what remains after theyve paid their taxes).
There's a "Pareto" version of utilitarianism, where you are not allowed to reduce anyone's utility, even if doing so could increase overal utility.
Not all utilitarians believe in it, and it doesn't seem derivable from vanilla utilitarianism. So it would seem to be a case of bolting on a deontological respect for rights onto utilitarianism.
People aren't "immoral" or "amoral" for not taking an action that wasn't even available to them. Obviously. No utilitarian in existence thinks that's the case. That's like saying you're immoral for not curing cancer. What a ridiculous argument.
I strongly disagree that killing random people to harvest their organs is actually a consequence of good utilitarianism (except in some very specific circumstances). I think the consequences of normalizing such an act would be horrific overall, don't you? I mean, think about it for more than 10 seconds. Calculating utility *is* extremely complicated, and it's worth exercising caution when someone suggests a seriously counterintuitive action, rather than just blithely going by what your Ethics 101 professor told you a utilitarian would do. I am not making any claim that utilitarianism provides easy answers to moral dilemmas. Simply that it is the one *ultimate* metric by which to judge which decisions are better than others.
The thing I'm mostly trying to counter is the phenomenon in which someone argues for a harmful course of action, has it pointed out to them that that course of action is harmful, and then still defends it purely on deontological (or other non-utilitarian) grounds. E.g., someone saying "It doesn't matter if this results in worse wellbeing; lying is always wrong."
Also, of course utilitarians strongly follow the law. Breaking the law, even for good reasons, tends to have bad consequences that fall especially hard on the person doing the lawbreaking. That doesn't mean that sometimes breaking the law and risking suffering the consequences isn't ever the right thing to do. But utilitarians, like any other humans, have selfish tendencies, and they aren't magically perfect actors by virtue of recognizing what the metric for a good decision is.
I don't think self describedn utilitarians want to harvest organs. I do think utilitarianism recommends it. Therefore, the "good" utilitarianism they practice is different to the textbook utilitarianism. It's not that self described utilitarians are evil, it is that they actually contractarians or rule consequentialism something.
This debate started with a question about obligation. Utilitarianism has two bad answers to that. One is that you are obliged to maximise ulility relentlessly, so that everyone except thanks is failing in their obligations. The other is to abandon obligation...so that utilitarianism is no longer a moral system in the sense of regulating human behaviour. What is needed is something in the middle.
Rule consequentialism can show that: you are obliged to follow the rules so long as they have good consequences , but the rules should not be excessively demanding.
So the problem is soluble, so long a you get deconfused about what utilitarianism is and isn't.
I'll add that classifying decisions as "moral" or "immoral" isn't even really a thing in utilitarianism. There are gradations of consequences. A given decision can be very good, but not optimal. If you want to define anything less than optimal as "immoral", fine, but I don't think that's helpful or useful.
>but the only way you'll be able to convince others that such a rule matters
My general model for how moral reasoning is supposed to go is "what is good?" -> "how do I achieve the good?" which inherently involves "how do I convince others that the good is good?". Here you seem to be arguing that X is not good because others cannot be convinced of X's goodness, which inverts that priority.
It's kind of hard to argue base axioms like these, but I don't like the ethical edifice which results from "X is good iff people can be convinced X is good". For the most obvious example, Hitler did a pretty good job of convincing people that anti-Semitism was good; more generally, it reduces ethics to "whatever is most memetically fit" and I find that profoundly ugly.
(Also, a lot of people do seem to be deontologists, so I'd question your assumption that Kant is doomed to get no traction ever.)
No, I'm arguing that X is not proven to be good until a good argument exists that proves it's good (except of course by virtue of utility). And because of that lack of a good argument, I'm merely pointing out that you won't be able to convince anyone that X is good unless they already happen to agree that X is good. I invite you to make arguments for why X, Y, or Z are good, divorced from impacts on utility. I just doubt you'll make a good argument for any of them. I've yet to hear one.
People are more or less born into deontology. You can't just tell a kid "do what maximizes utility" because kids are stupid and lack the experience and reasoning capabilities to execute that instruction well. You get better results explaining to them that there are rules that must be followed, so we pretty much all start out from there. Many people come to realize that such rules aren't *inherent* truths and become consequentialists as they grow older and wiser. I don't think I've ever heard of someone going from consequentialist *to* deontologist though, except in cases of people who adopt a new religion which has prescriptive ethics based on some irrefutable revelation or something. Which brings up a broader point - many people's deontology is tied to their religious beliefs, as basically every religion tries to define ethical behavior with rules and aphorisms and whatnot. And lots of people are religious, though fewer nowadays than back in Kant's day.
>I don't think I've ever heard of someone going from consequentialist *to* deontologist though, except in cases of people who adopt a new religion which has prescriptive ethics based on some irrefutable revelation or something.
I used to be more of an ethical hedonist than I am now. Utility monsters are annoying and there are some perverse incentive problems.
>I'm arguing that X is not proven to be good until a good argument exists that proves it's good (except of course by virtue of utility).
The problem is that there isn't a good argument that proves utilitarianism is good either, and I can't distinguish your argument for using it as a starting point ("people agree on it and will ban you from everything if you disagree") from memetic Darwinism and/or argumentum ad baculum.
Really? What are those perverse incentive problems? If utilitarian reasoning is leading you to worse outcomes than other forms of reasoning, it just means you're doing utilitarianism poorly.
I agree that no argument whatsoever *proves* any moral theory. Call me a moral nihilist, I really don't care. But from that position, I'm imploring you to recognize the universally acceptable foundations of utilitarianism. Call this intuitionism, again, I don't care. The point is that the only grounded reason for any decision being "better" than another must reference back to how it ultimately affects wellbeing. Nobody can sanely deny that that consideration matters. *Some* people might feel there are other considerations, and I try to make those people recognize that their reasons in support of those other considerations are always either A) referencing back to utilitarian considerations, or B) just because. And if their reason is B, I've usually hit a wall.
>The point is that the only grounded reason for any decision being "better" than another must reference back to how it ultimately affects wellbeing.
No. There are multiple moral foundations - Haidt's classification is care/harm, liberty/oppression, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation.
It is true that among the WEIRD, the latter four are atrophied (to a greater or lesser degree). But this is not a proof that they are meaningless, just as the existence of psychopaths is not a proof that morality entire is meaningless.
(I feel obliged to note at this point that ethical hedonism is only one form of consequentialism; there are ways to bake the other foundations beyond care/harm into a utility function, or to value consequences without the use of a global utility function.)
The perverse incentive problem is that in an ethical-hedonist society one is incentivised to, as far as is possible, convert oneself into a utility monster and/or present oneself as a utility monster. There are ways around this, but they all essentially boil down to implementing fairness/cheating.
One slight correction: some people are convinced by bad arguments. So I suppose you will be able to succeed in convincing some of those people that X is good in the absence of a good argument for why X is good.
I agree with most of what you say. I think that utilitarianism or rule utilitarianism covers 99% 0f issues. My "ought" question is why "ought" I include others in my utility function? I do or think that is my reason, because I am a Theist, specifically a Catholic Christian.
Interesting. I used to be Catholic, and when I was, I approached ethics from a less utilitarian position than I do now. Not completely deontological, but I was more in the habit of referencing moral "rules" and weighing them against one another than I am now. This seemed consistent with the dogma element of Catholicism.
As for why I decide to include others in my utility function, it begins with the recognition that those others possess consciousness and also experience suffering and joy. I simply cannot bring myself to believe that my pain matters more than another being possessing the same capacity to experience pain (even though I, of course, often do prioritize my personal wellbeing over that of others when it comes to taking action). Alas, this is an understandably natural tendency for us humans, but I can at least recognize it for the irrational, biologically driven bias that it is. Part of why I advocate for utilitarianism is just that it's practical. Recognizing that utility matters to every individual is something everyone can do. When we profess to care about utility, we are saying to others "I value your wellbeing as you value your own, and I expect the same in return." And that's a pretty easy thing to get on board with. It's pretty darn close to the golden rule, which is perhaps the most well-accepted ethical rule I can think of. But once you throw in weird additional specific rules, like "don't lie", that's when people start jumping ship.
Some people do also possess the intuition that lying is *inherently* wrong, and, well, that just leaves the rest of us scratching our heads. Like what does it even mean to be *inherently* wrong? In what way? Because it checks the wrong box? It seems to me like something like that could only be true if theism were correct and some omnipotent being did indeed reveal such a cosmic truth via revelation.
Funny thing, but even an omnipotent being doesn't solve the issue. Suppose that Gods reveals Themselves and tell us that doing X is objectively inherently wrong, unrelated to any our intuitions or utility calculations. What would that mean?
If God would punish us for doing X or reward us for not doing X that would affect our utility calculations and be meaningful. But otherwise why would we care what's Gods point of view regarding X?
I was imagining a world where omnipotence extends to the ability to objectively define concepts. But in a strict sense, I agree with you, and doubt such a world is even conceptually possible.
Yeah. I'm just really amused by how eager we are to assume that the existence of God would solve some philosophical problem but as soon as we think about it some more, it becomes clear that this actually would change nothing.
As far as I know, there is no Catholic consensus that consuming marijuana is a sin. I imagine the interpretation is similar as with alcohol, and that there are subtle differences between responsible, acceptable use, and sinful indulgence in which you allow some of your moral decision-making abilities to be weakened.
But I agree in general that Catholicism is a dogmatic religion and has things like the catechism, which define wrongdoing rather explicitly, and without reference to consequentialist reasoning.
I think a fair number of utilitarians would also caution against marijuana use, pornography, and other similarly indulgent activities in lots of circumstances. It is totally possible that seemingly harmless pleasures could have insidiously bad effects for utility when everything is taken into account.
But I suspect that most people only have the intuition that "lying is bad" because lying often leads to less utilitarian outcomes. If lying somehow consistently lead to positive consequences, would you still have that intuition? Same goes for "don't violate someone's natural rights". In most ordinary real-life applications of that rule, not violating someone's "rights" IS the utilitarian thing to do, and so utilitarians can still earnestly promote it, at least as a useful rule of thumb if not an absolute moral principle.
I'm curious if you have any ethical intuitions that can't be tied back to utility in this way. Is there any ethical precept you would advocate for, that in most ordinary real-life applications of it (as opposed to in thought experiments concocted by moral philosophers), tend to produce outcomes that run counter to, or at least orthogonal to, utilitarian concerns?
Sorry, but I completely fail to see how saying "I have this intuition" translates to an ought.
But fine. My intuition is that we "ought" to maximize utility, and that's my only moral intuition. The fact that it intuitively seems like I should act to maximize utility is a good reason to maximize utility. You could say this about anything. Every person has the intuition that wellbeing is something that matters, and if they claim otherwise, you can simply disregard them as crazy or a liar. No other ethical intuition is as universally shared. Not even close. "Lying is bad" is a qualified belief of mine only by virtue of the fact that I think it is often harmful. In cases where it seems likely to be net beneficial, it simply isn't my intuition that it's wrong. I strongly disagree that the blanket statement "lying is bad" is a pretty universal intuition **regardless of utility maximizing considerations**. If lying was something that was known to tend to result in good outcomes and happier people, wouldn't everyone's intuitions would be that lying is good? It's precisely those utility maximizing considerations that make it intuitively feel wrong to most people. Ethical intuitionism could very well be an effective strategy for maximizing utility, but that doesn't mean the intuitions themselves objectively point to any oughts. You can't prove an ought. Deal with it.
Most people absolutely rely on utilitarianism whether or not they consciously realize it. Whenever someone recognizes that one of their moral dictums leads to a repulsive conclusion in terms of wellbeing, they almost invariably call upon some other moral dictum which "intuitively" feels like it has priority in that instance. Funny how that happens. You do it too. From reading your comments on less meta topics, I do find you to be really insightful and thoughtful. You seem to reason like a good utilitarian, in my view. But then when you explain the reasoning behind your reasoning, you call upon all this extra fluff, and I'm completely baffled as to why.
And no one calls you a psychopath because I bet you don't explicitly tell people you disregard their wellbeing. Of course, I doubt you actually *do* disregard other people's wellbeing in the first place. I wasn't talking about you there. Just a hypothetical person who actually wants to dispute that the wellbeing of others matters, my point being that practically nobody will take that view.
Here on ACX. Don't think I've heard of DSL. And thanks to you as well! This is one of those topics that can get me worked up, so I hope I didn't come off too harshly.
Utilitarianism can't tell you what to maximize, it just tells you how to maximize.
Most/many utilitarians use simple utility functions around things like 'reduce suffering and maximize flourishing' or w/e, but that is an arbitrary decision made because those seem like good ideas to the type of people who are utilitarians.
Taboo "objective morality" and ask your question once again.
Is there some set of rules, condeming actions good or bad, unrelated to our intuition and utility calculations which is somehow supperior and which we should follow even against our own value system?
No.
Can our values, moral intuitions and proposed utility functions be presented as an approximation of some other utility function, which we would prefere to follow if we knew better and were the kind of people we wish we were?
To the extent that morality is applied game theory, for which we have both evolved intuitions and cultural consensus, it makes sense to claim that it has an objective basis. Beyond that, any nuances that arise from idiosyncratic circumstances of a certain society or the human condition as a whole are essentially arbitrary.
He may not, but I do. Normatively speaking, anyway. I guess there might be a "true" "objective" morality in the sense of some sort of average across each individual's morality, and I suppose that your intuitions are probably accessing something like that (limited in some way to the individuals that make up your cultural heritage or whatever).
You can't know your premises to be true. You choose your 'oughts', you can't discover them from the outside world - there is no 'ought' in physics. From an utilitarian perspective - meaning having already chosen that we ought to maximize utility - follows that sometimes we should lie because lying maximizes utility (by preventing the axe-murderer from murdering your friend) and, uh, I don't know how you define natural rights but assumedly there are situations where they should be violated to maximize utility.
Let's say you decide to be a counter-Utilitarian - you want to maximize suffering instead.
I can't prove you wrong using logic or science. I *might* be able to say that you don't seem to act that way in practice, and for instance avoid suffering for yourself. But maybe you don't do that, and you actively seek out suffering for yourself and others. I could argue that this against the moral intuitions of the overwhelming number of people, but you're within your rights to shrug and say that they're wrong. I could argue that hardly anyone *wants* this, but if you don't care what people want, why would it matter? Ultimately, I may not be able to convince you.
Would you be wrong? There are two answers here. I would argue that you're *mistaken*, and that we should not in fact maximize suffering (but I can't really prove it to you). And I could also say it's extremely likely that if you acted in accordance with your counter-Utilititarianism, it would outcomes that I, every other Utilitarian, and the overwhelming number of people, find bad, and that such actions are immoral. You, of course, would not agree, and you would find Utilitarians as abhorrent as we find you in this thought experiment.
If you just want to maximize the amount of Truth-statements, or perhaps minimize the amount of False-statements, that wouldn't be nearly as abhorrent, but it seems like a super weird objective. The kinds of things you should do if this is what you want seem absurd. It's just one step beyond paperclip-maximization.
1. This is true. I don't. But I would say that this makes me a bad utilitarian, the same way some sinner could be a bad Christian.
2. I would disagree about this. The utilitarian solution to the trolley problem, for instance, disagrees with Kantian and Christian morality, but agrees with the moral intuition of the large majority of people. If you wanted to test moral systems against intuitions, I don't think you would find anything that beats utilitariansim. Medical ethics lean _strongly_ towards utilitarianism. You have to construct very complicated situations - the Fat Man might be one - before moral intuitions start to go strongly against utilitarianism.
3. True, and this is very interesting. Christian morality is motivating - you will burn in Hell if you don't do the right thing. But utilitarianism isn't - it's abstract and doesn't push itself on you. To quote Brave New World: “Happiness is a hard master – particularly other people’s happiness. A much harder master, if one isn’t conditioned to accept it unquestioningly, than truth”
4. Again, I disagree. It's highly unlikely that the Fat Man has ever *actually* happened in the history of mankind. Meanwhile, it's trivial to come up with real, actual examples where lying is the right thing to do.
Not wrong at all, because there is no "right" and "wrong". Those are concepts invented by the human mind. There's no stone tablet of rules in the universe about what is right and wrong. I am simply a utilitarian because, when peering into my soul and asking myself "what rules are in the stone tablet there?" i see
1. I am conscious
2. My suffering matters
3. Suffering is bad
From those, you can get to utilitarianism, assuming you also see those rules in your soul. But if you see different ones, well, then you'll see different "rights" and "wrongs". And no one can say, objectively, your right is right, because there is no objectivity here. Moral relativism (at least in the weakest sense), is true.
Actually, it might be more correct to call this moral nihilism instead of relativism. Moral nihilism is true. The weak form of moral relativism (everyone thinks they are the good guy in their story) is true. The STRONG form of moral relativism (everyone is the good guy in their story and thus we shouldn't judge them) is false.
Cultural and moral relativism would not typically apply, although there could be situations where a certain act results in a net positive in some society or time but not another. It's possible a medieval monk would suffer more from not being allowed some self-flagellation, for instance, even though you should stop your kids from doing it.
But this is just because different things can cause different amounts of happiness or suffering depending on context.
Oh no i agree, i meant relativist in the kind of universal sense. Like, if someone tortures a baby, all of humanity agrees that is bad. But there is no objective metric we can use to say that it's bad, just our "intuition". God isn't going to come down and say NO. BAD.
You mean, "why be a utilitarian?" My answer would be that happiness, unlike truth-telling, is *inherently* valuable. I can't use science or formal logic if you disagree, but it seems easy to set up a situation where telling the truth would go against moral intuitions, common sense, or being a decent person.
I recall seeing a collection of links from Scott about alternative healthcare options or navigating the healthcare/health insurance system if you're not in an ideal position. Does anyone happen to know where that was or have a link?
If anyone is looking for interesting content across tech, media, art, finance feel free to check out: https://gokhansahin.substack.com/p/curated-content-for-busy-folk-45
Here's a slightly more mathematical/coding based puzzle.
Consider the following snippet of pseudocode. What property do you think "test" might test for?
if( (trailing_zeroes(x) & 1) | (((x >> trailing_zeroes(x))&7)^1) ==0): test(x)
Explanation of the components for non programmers:
&,| and ^ are binary and, or and xor, so 3&5 =4, 3|5 = 7, 3^5 = 6.
>> is rightshift - that is, x>>n is the floor of x/2^n
if(a==b): c will do c if a and b are the same number, and do nothing otherwise.
(trailing_zeroes(x) & 1) is 0 if there are an even number of trailing zeros, 1 otherwise. ((x >> trailing_zeroes(x))&7)^1) strips the trailing zeroes, keeps the second and third digits, and sets the remaining to 0. Hence this tests if x = (y*8+1)*4^n for some nonnegative integers y, n.
How does 3&5 = 4? 3 = (0)11, 5 = 101; 3&5 = 001 i.e. 1.
Yes, sorry. I probably shouldn't try to do binary operations in decimal in my head.
you’re right prob typo
I’m assuming trailing_zeros is the number of least significant zeros on a binary representation before you get to a one? Which is then I for the largest 2^i dividing the number. x >> trailing zeros is then x with all its factors of two divided out.
turns into (1 if i is even otherwise 0) | (first three bits of x without factors of two, with first bit set to zero as it must be 1 unless x was zero). Assuming x isn’t 0, the second part tests if the first two bits after the first 1 are both 0s. This is true if x, minus all it’s factors of 2, is equal to 1 mod 8. Bit wise or is logical or as it’s operating on either 0 or 1. So, uh, tests, if x = (2^p * s) and s has no factor of two, (p%2==1) || (s%8 == 1)? Don’t thing I quite figured it out. Does this have some number theory significance? Did I get it right?
Almost, but not quite. And yes, it's (very basic) number theory, in a sense.
Got nothing beyond that, oh well. I’ll try the hyper cube one
Does anyone knows if 'Boss As A Service' is still a going thing, accepting new clients? I did sign up but haven't heard anything. Also, any recommendations for personal accountability? Beeminder isn't doing it for me, either because I need an actual person or because I need to set individual goals for each week (not a generic weekly target).
Anyone know if any good meetups for learning solidity ? I’m already on blockchain NYC, and it seems good, but they don’t meet as often as I’d like. I’d also like to hear about any good discords for solidity novices.
Scott what should I put for this question? https://www.metaculus.com/questions/6554/astral-codex-ten-mentions-this-question/
Any way to see how Scott voted on the question?
My guess is you'll look at whatever the community prediction is and then use an RNG. I hope people go high.
Any interest in a diplomacy game? We (over at DataSecretsLox) are trying to get a quorum together. We have five. Need two more.
I think the following post is _not_ about politics. If Scott disagrees, I apologize in advance, feel free to delete this without feeling bad about it
The other day I wrote up some thoughts about the latest spike in the covid pandemic, some musing on the reliability (or lack thereof) of data, and the value of using common-sense heuristics with appropriate uncertainty. The folks on Facebook mostly ignored it, but maybe you all would appreciate it more. It is reproduced below
----
Disclaimer: the following post is not meant to imply anything beyond exactly what I am saying. Please don't assume I have other subtextual or connotative conclusions.
Austin is in the middle of a covid spike and there are a lot of different (and conflicting) messages coming from various authorities and data sources. How do we know what sources to take seriously? What should we believe?
Personally, to square this circle, I've taken a handful of heuristics that have served me well. One of them is the presumption that covid is seasonal, like basically every other respiratory illness. Based purely on this heuristic, on July 30th, I made the following prediction:
The current spike in Austin will peak between August 12th and August 18th, and fall afterwards almost as quickly as it rose.
Why did I make this prediction? Last year in July, the spike (as measured by the 7-day rolling average of daily hospitalizations) peaked 13 days after entering stage 5. Last year in December, the spike peaked 19 days after entering stage 5. We entered stage 5 on July 30th, 2021.
Sure enough, it is now August 15th. And what actually happened? The (7-day rolling average of) daily new hospitalizations were rising very quickly at the start of August. Around August 6th, the rate-of-increase began to shrink. The hospitalization rate peaked on August 11th at 83.6 (Higher than the July 2020 peak at 75.1, lower than the Dec 2020 peak at 93.7). It has been falling ever since, at 78.7 today. If it continues to fall at the same rate it is falling now, we will leave stage 5 somewhere around August 22nd.
(Note: August 11th was only a few days ago, and there is the possibility that this is premature. However, since I'm looking at a 7-day rolling average, any trend needs to exist for a week-ish before it shows up in the graph at all. This gives me confidence that this isn't a blip, but a real trend).
(Added for ACT comment: I wrote this two days ago, when the most recent data point available was Aug 13. The next four days of data continued the downwards trend, albeit with a blip today. My prediction conditional on "fall at the same rate" is likely not correct, however we appear to still be on a rapid downwards trajectory)
I would like to propose some simple questions to all of you. I am not trying to argue or convince anyone of anything; Facebook is not the correct place for that. I'm simply curious what thoughts other people have put into this, and would like to politely suggest consideration of some things people may not have considered.
1) Is my above characterization of this spike fair and accurate? For the record, I am going primarily off of the city's data dashboard, available here https://austin.maps.arcgis.com/apps/dashboards/0ad7fa50ba504e73be9945ec2a7841cb
2) How bad did you think it was going to be this time around? Better than this? Worse than this?
3) What were the trusted authorities (whichever authorities you choose to trust) saying about how bad it was going to be? Why were they saying what they were saying? Were they accurate?
4) Was my prediction from three weeks ago accurate?
Consider the following: There are a lot of people out there, with fancy credentials, decades of experience, deep domain knowledge, and formally recognized positions of authority on this pandemic. They have made various claims about what will happen, and justified those claims with elaborate appeals to their expertise. Meanwhile, I'm not a doctor, I have no statistical models, and no deep reasoning. I just have a simple heuristic: "it'll probably be the same as it was last time". Assuming that you agree with (1) above, my predictions in the past few weeks have been more accurate than every single public health authority I've paid attention to.
During the pandemic there has been a lot of talk of "trust the science" or "trust the experts". Science is very important and frequently correct, and trusting it is generally a good idea. But more important than trusting it is understanding it, and I've been somewhat disappointed in what I've seen in that regard. Part of understanding science is understanding its limitations, and its biggest limitation is time, effort, and data. Science takes time and requires a large amount of data and analysis before we can make strong and confident predictions. Despite today being March 533rd, 2020 (https://calendar2020.noj.cc/), 500 days isn't that much time by scientific standards (something something Zooey Deschanel). And despite our increasingly quantified society, much of our data is still very noisy and less objective than we'd like to believe. While we can, should, and are doing all the hard work of scientific analysis to better answer these various questions, we should also be modest and be careful not to overstate our confidence in unreliable data.
It's easy to be bamboozled by charts and graphs and numbers. But at the end of the day, things still have to make sense. Things are roughly consistent over time and space. Magic doesn't happen. Everything has to be coherent with itself and each other, and unprincipled exceptions to basic principles rarely if ever happen. All the data in the world won't help you if that data is noisy, unreliable, or incomplete. Meanwhile, basic common-sense rules of thumb, based on the above simple principles, taken with the appropriate amount of uncertainty, is frequently a very good way of making accurate and well-calibrated predictions.
I'll leave you all with a parting question. Our leaders invoke complicated and elaborate statistical models, detailed research papers, and decades of subject matter experience and they make one prediction. I ignore all of that, and make a different prediction based on nothing other than the basic scientific principle that patterns are real. Their predictions took millions of dollars and thousands of man-hours of time to make. My prediction took about 15 seconds of eyeballing a graph (copied below, so you can check my work). My predictions are more accurate than our experts'. So the question is: What do we do with this information? If basic rough guesses are more accurate than all that scientific work, how should we respond to this in a way that gives us the best predictions and information going forward?
https://files.catbox.moe/xa0jt6.jpg - annotated graph of 7-day rolling average of daily new hospitalizations, taken straight from the City of Austin's Staging Dashboard
> my predictions in the past few weeks have been more accurate than every single public health authority I've paid attention to.
What did they say?
I think the problem is that if we just look at 15 seconds of eyeballing a graph, we end up with COVID is just another swine flu, SARS, Ebola, West Nile, etc and won't really affect the US. Was it a black swan or were we looking at the wrong graph?
Also is the scientific prediction predicting the graph or causing it? Is the Covid season "the July and November/December holidays" or is it "when things are so calm everyone relaxes" followed by the thing that ends Covid season: "people panicking"?
I personally have no clue.
A brief interlude at AI Defense in Depth, to introduce our youtube channel:
https://aidid.substack.com/p/gestalt-communications
Just discovered Ted Gioia, one of my fave writers on music, has a substack - https://tedgioia.substack.com/people/4937458-ted-gioia. Anyone interested in music should check it out. A good sample: I didn't know James Joyce was almost a famous singer - https://tedgioia.substack.com/p/how-james-joyce-almost-became-a-famous
What are your favorite newsletters other than this one?
Persuasion. I don't agree with or like every article they put out, but I value the diversity of voices there.
https://www.persuasion.community/
Max Gladstone. I like his analysis of narrative and story construction.
https://maxgladstone.substack.com/
My 10 year old daughter has gained weight during the pandemic, mostly as a result of her exercising and playing less. She went from slim to chucky, with an obviously fat stomach and face. Now I'm concerned that her body is defending this new setpoint.
Does anyone have any advice for helping her lose weight? Yes, I am trying to get her involved in sports, but it's been slow so far. We cook almost all meals at home and have a fairly healthy diet.
Look up rec. calorie intake for her height and age, try to give her that in healthy food (whole grains, ferments, colorful vegetables, etc. Also remember that some more caloric foods like meat and cheese are more satiating than an equal amount of vegetables or carbs. Kids seem to benefit from having proteins and fat in their diets.)+/- a bit, and try to get her exercise every day.
She doesn't need to be doing squats or something, just take her on a walk around the park and encourage active play.
Basically, as long as you don't let her eat processed snacks and deserts and/or sweetened drinks (Fruit juice counts! IT's about as bad as soda!) all the time, you'll probably be fine.
Fruit juice isn’t necessarily as bad as soda. It depends a lot - you can make “fruit juice” at home by just grinding up whatever fruit you have and putting it in a cup (maybe more of a smoothie), and there’s a range between that and “sugary water with extracted fruit flavoring added”. I’d still recommend fruits over juice though, and most juices tend towards the second.
I stand by my point with an *, that being; if it's transparent/pulp free, it's in the same category of badness as soda.
Umm I'm not sure why people still don't know this, but weight loss is 90% diet. So keep involving her into sports (as it's very healthy), but for weight loss just keep cooking her healthy food, make sure she doesn't eat too much BS outside from that, and she'll lose weight
Her diet did not change, but her outdoor activities did. I mean, is possible she eats more now than then, but the types of foods have not changed. And we eat a quite healthful diet.
Don't you have control over how much food your daughter eats?
(I'm not suggesting outright denying meals, but more measured portion control. And I'd suggest doing the maths, as others have suggested, to ensure she's actually obese before considering these kinds of measures - the classical "overweight" range is not actually harmful.)
I don't have specific advice but I would disagree with the people saying not to worry about it. Being fat in school is horrible and being fat in general is horrible ( I am fat ). I started putting on weight when I was 11 after I stopped playing soccer and it has been a constant blight on my life, my parents made no effort to help me lose weight or stay in shape after I started gaining and it is basically my only complaint about how I was raised. Good luck.
It depends if she's putting on fat because of over-eating and lack of exercise, or if she's putting on fat because her body is preparing for puberty. If she is not grossly overweight by the BMI for her age and height, then I wouldn't worry too much.
Stressing out over weight at this age will only sow the seeds for eating disorders later in life. Cut out junk, get more exercise, watch and see how her body responds. Making a big deal out of it or neglecting it are both bad approaches. Making a ten year old self-conscious about "I'm too fat" leads to a host of bad results, even if the intentions of the parents are good.
It's true. I managed to lose a lot of weight and since regained it, when I was skinny people treated me differently. It's only attractive people who say looks don't matter.
Back in the day that was called "puppy fat". If she's ten, then her body could be gearing up for puberty (average age for girls is eleven). If you're concerned, then the advice seems to be to check BMI and compare her weight to the average for her sex and age:
http://pro.healthykids.nsw.gov.au/calculator/
Average weight for 10 year old girl - 70 lbs
This seems right on, I (male) did this too (around 11-12) and ended up a very skinny teenager/young adult. I only assume the pounds I put on the last couple of years (mid-thirties) are only just preparing me for another growth spurt.
I'd hazard a guess that her body's getting ready for a growth spurt. Wouldn't worry too much about being svelte at 10 when she'll be morphing into a teenager over the next few years.
I saw someone on Twitter who disputed a medical bill that he received by arguing that they had given him an extremely complex code when a simple one was necessary. It occurred to me that would be an amazing public service if there were a publicly searchable database of all billing codes. I believe that new legislation now allows all patients to read their charts, which would enable them to access these codes. If not, who could create it? Would it violate some sort of intellectual property to create a site like this?
Spoke to doctor's office, asked them what vasectomy would cost. They told me cash cost. I asked how much it would be on insurance, they said they had no idea it is based on my plan, etc. I said no, how much would you charge my insurance based on negotiated rate, I could figure out my share from there (which would be all of it, HDHP). Doctor's office said they had no way of knowing until I go in for an initial consultation (which I would be charged for) but I could call my insurance.
Called my insurance and they explained they were legally prohibited from disclosing the negotiated rate with the provider. I could call the provider back and get the exact ICD code they would use and then call the insurance back and they might be able to tell me what is the generic (not specific to that provider) negotiated rate for that ICD code.
This is a common, completely elective procedure that should be the same as LASIK or plastic surgery, but whatever. I ended up going out of network with someone who told me a reasonable cash cost.
Anyway, just something to say that even if ICD codes were easier to navigate there are a number of roadblocks in the way of consumer empowerment.
Good news. Already exists. ICD-10 is the current diagnosis code list (in the US). You can find quite a few look-up websites with a google search.
warning: not a lawyer
"Would it violate some sort of intellectual property to create a site like this?" - as far as I know no, but HIPAA specter hovers over it.
Are you liable if someones enters own codes without understanding how much it reveals?
Are you liable when someones entered code of their mother / friend / employee?
Are you legally liable when someone entered lies and someone else get cofused? (you are definitely sueable as usual and it may be not straightforward to dismiss)
> If not, who could create it?
Anyone? For start it could be a Google doc or standard wiki install. There are many places where it could be hosted for free, for example wiki on an empty Github repository.
There is a slim chance that it would be in scope of existing projects like https://www.wikidata.org/wiki/Wikidata:Main_Page (you can ask on https://www.wikidata.org/wiki/Wikidata:Project_chat )
To be more specific: individual facts are not covered by copyright and in USA uncreative collections of facts are also not ( https://en.wikipedia.org/wiki/Database_right are not a thing in USA )
And still, entering it code by code likely would not trigger database rights at all.
Related to this question, are there any services out there that automatically submit similar claims on behalf of patients? I know there are services that contest tickets and have a pretty high sucess rate. Something comparable for medical bills would be significantly more useful, assuming that contestable overbilling happens moderately often.
I have read a few defenses of logical positivism lately, not because I was seeking them out but rather I happened to come across them. In particular, Liam Kofi Bright has appeared on a few podcasts arguing for it, and Scott himself has written something kind of like a defense of it. Wikipedia, however, describes it as "dead, or as dead as a philosophical movement ever becomes", and as far as I understand there were some slam-dunk criticisms of it that really did show it being untenable (although I personally have not looked into them very much yet). Is there now a resurgence in logical positivism, or did I just happen to stumble across the only two people who will defend it?
It's typically more Popper, Quine, and Kuhn that were considered to have put the nail in logical positivism. They may or may not have been "correct" full stop in their own ideas, but by at least trying to formalize science by reference to the actual practice of scientists, rather than trying to derive it from first principles, they got a lot closer than logical positivism.
If that didn't do the trick, then information theory and statistical estimation and learning should have put the truly final nail in the coffin. It's pretty clear at this point that statements don't need to be empirically verifiable or logical provable to convey information.
I'd like to learn more about this!
"If that didn't do the trick, then information theory and statistical estimation and learning should have put the truly final nail in the coffin. It's pretty clear at this point that statements don't need to be empirically verifiable or logical provable to convey information."
I wonder. I do statistics for living, and all examples where statistical inference says anything about world needs data that originate from empirical (including observational) sources -- and speaking of observations, one can get only a limited view into causality with observations alone without conducting experiments. I surmise I could subscribe to some sort of statistically informed logical (or rather, probabilistic?) positivism, if someone would formulate such a theory.
I have not read Quine nor Popper's philosophy of science. I have not exactly read Kuhn either, but the meaty points of Structure of Scientific Revolutions are described in many textbooks and reviews.
What I have understood of Kuhn and other late 20th century philosophers of science seem to pose descriptive theories that seem more sociological, and thus say nothing useful to a scientist (or anyone else) who'd like to know more about how to get closer to truth.
In sociological sense, Feyarabend might very well be right in the sense that one can find some corner of "science" where anything goes if it is the fashionable way of doing things, but it certainly does not sound very good way of getting better at science.
Logical positivism had one inspiring thing in it, the confidence that there is a scientific method to be found. Looking at everyone involved from a century afterwards and without getting too deep into their differences, logical positivists start to look not so different from Popper's falsification.
I don't think the takeaway should be that logical positivism had nothing at all going for it. The problem is that a core claim was statements that can't be empirically verified or logically proven don't have meaning or information content. Probabilistic statements, even if the probabilities are derived from empirical observation, inherently can't be empirically verified. They can only be estimated. You can't actually run all possible worlds in parallel to measure the density of observed positive trials. That is what I meant by them putting a final nail in the coffin.
I definitely think you should maintain confidence that there is a scientific method to be found. It just needs to be found by observing what scientists actually do and what seems to work better than not. Ironically enough, logical positivism failed because it wasn't itself based on empirical evidence. I think what we see with science in practice is falsification more often than verification, and probabilistic statements more often than absolute statements.
> Probabilistic statements [..] inherently can't be empirically verified.
Sounds like a poor definition for "empirically verified". On the contrary, I have not seen empirical verification which isn't statistical statement. That is why I suggested a *statistically informed* reformulation of positivism.
>It just needs to be found by observing what scientists actually do and what seems to work better than not.
Sounds great, except, what if I happen to be scientist and I want to get better at it? This is exactly why I think the sociologically-oriented philosophy does not seem to have anything to offer.
I think Godel put the nail in this theory since there will be statements that can’t be logically proven but which still have meaning. I could be mistaken, but I had thought his incompleteness proof was an example of this.
AFAIU Godel only proved the risk for incompleteness if you care for self-referential propositions. If you can live without those, you can have completeness. I don't see why we should care for those too much.
A motivation to care for them can be provided by induction. If you don’t care for about mathematical induction, you could not care about them. But Godel was specially talking about arithmetic so he the we’re important over that domain. If you never want a formal proof that uses induction then again, you wouldn’t have to worry. But I don’t know how far hon would get practically with that scheme.
I enjoy the irony that you can only say this through the use of self-referential propositions.
Godel put the nail in the coffin of mathematical logicism, but it looks like even after Godel there were logical positivists. This seems to be some sort of contradiction for the reasons you mention, so I'm not sure how this happened. I'm looking for Liam Kofi's response to Godel, but most of his work talking about positivism is in podcasts instead of easily searchable text.
I'm in Zurich for the next two weeks. Is there any meetup here during that time period? I'd also love to grab coffee or something with anyone here. Let me know if you're interested!
I have been trying to follow the war in Ethiopia through reading any new articles I come across, but I feel like they never give a very complete picture. A few things seem very odd to me. The dispute escalated very quickly from timing of elections to a pretty brutal civil war, for instance.
Is there a good long form article that gives a good overview of the Ethiopian war so far? Or maybe a report from a think tank/gov agency or something?
"The Red Line" is a podcast about geopolitics. They recently had a 90 min episode where three experts gave an overview over the conflict.
title:
E48 The shattering of Ethiopia (The war in Tigray)
Thanks Heagar. I just listened to it. That was exactly the sort of thing I was looking for.
"DSM Review: The Meanings of Madness: The Diagnostic and Statistical Manual of Mental Disorders, or DSM, features nearly 300 diagnoses. What’s the science behind it?" By Stephen Eide | Aug. 15, 2021
https://www.wsj.com/articles/dsm-review-the-meanings-of-madness-11629062194
"If there really is a mental-health crisis, then doctors—psychiatrists—should have a lead role in responding to it. Rutgers sociologist Allan V. Horwitz’s history of the “Diagnostic and Statistical Manual of Mental Disorders,” or DSM—the medical field’s definitive classification of mental disorders—explores psychiatry’s claim to such authority. “DSM: A History of Psychiatry’s Bible” puts forward two arguments: first, that the DSM is a “social creation”; second, that we’re stuck with it.
Much of the article is behind a paywall - do you think there's stuff behind the paywall that's not already covered by, e.g., Scott's Ontology of Psychiatric Conditions series? (https://lorienpsych.com/2020/10/30/ontology-of-psychiatric-conditions-taxometrics/)
My impression is that there's a very well-established critique of the DSM, which is that the people writing it and using it do not think very clearly about the differences between spectra ("tall" vs "short", where the label just describes which end of a distribution you're on) and taxa ("has the flu" vs "does not have the flu", where the label corresponds to a real categorical distinction in nature), and often make bad inferences about how mental illness works as a result of this confusion.
The linked article seems like it might just be making this point again, but without reading past the paywall I can't say whether it might also have some more novel stuff to say.
It is a book review. It is not presented for the truth of the matter. I presented it as notice of a new book that might interest Our Fearless Leader and some scientists or physicians in the audience.
It is a subject that I have no understanding of. Although my wife was a student of Alan Frances who was the reporter for DSM4 and rebuked the authors of DSM5. I once meet him socially. Lovely fellow. My wife has never expressed any views on the subject to me.
As for the paywall. That is WSJ.com's doing. Many people do subscribe to the site as it is one of the most influential. Many people may have access through an institutional library.
On which timescale protein in a diet should be complete?
https://www.healthline.com/nutrition/complete-protein-for-vegans answered many of my questions (though if someone knows better resource - let me know!), but I am unsure whether I should care about complete protein each day? Each week? Each month?
I ask as I noticed that I started eating less and less meat recently AND I am prone to repeating eating the same food for days AND I hate tofu/soy products/chia/quinoa. So I am worried that I should worry about incomplete protein.
But is (for example) eating a lot of rice (and just rice for 5 days) and then eating a lot of beans for next three days balancing each other or not at such timescale?
Probably relatively short. The body does not store excess protein in the diet, the way it stores excess carbs or fats. Excess amino acids are just combusted and the nitrogen immediately excreted as urea, which is why the body cannot later construct amino acids from stored fat, the way it can construct new carbs or fats.
this is true, but this doesn’t apply to actual vegetarian diets because most staple foods have mostly complete amino acid profiles, so you’re fine so long as your diet is somewhat varied. that’s apparently the “expert consensus”. Reviews find that vegan diets aren’t any more deficient in specific amino acids than non vegan ones. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6893534/ https://www.ahajournals.org/doi/full/10.1161/01.cir.0000018905.97677.1f etc
The Wikipedia article has opinions on this. https://en.m.wikipedia.org/wiki/Protein_combining
In my opinion “just rice for five days” is a terrible idea if you mean literally only rice. But if it’s like rice for five days with a dozen other things, then beans for three with a dozen other things, maybe that’s fine. You’d probably feel a bit off of it was causing problems. But I have no clue.
People have been fine after much worse diets than just rice for five days. There's a guy who gave up food for Lent and lived on beer alone! https://www.youtube.com/watch?v=h9EEghd_TFg
Or the guy who lived on potatoes for a year https://www.today.com/health/spud-fit-man-loses-weight-eating-only-potatoes-year-t106144 , and he was fine
I’d still suggest one eat some fruit or nuts or whatever else along with it
This study (found as wiki citation) was one of the original series demonstrating essential amino acids were important
https://www.jbc.org/article/S0021-9258(18)50916-9/pdf
On page 4 one can see a figure showing the results of essential amino acid removal from diet. It seems effects set in quickly within four days. But that’s with total removal, something unlikely in any real diet. So I doubt this has real relevance to your question, but it’s cool!
> On the 4th day of isoleucine deprivation, the output of nitro- gen exceeded the intake by 3.79 and 3.90 gm. respectively. Both young men complained bitterly of a complete loss of appetite. Each experienced great difficulty in consuming his meals. Furthermore, the symptoms of nervousness, exhaustion, and dizziness, which are encountered to a greater or lesser extent whenever human subjects are deprived of an essential amino acid, were exaggerated to a degree not observed before or since in other types of dietary deficiencies, It became evident by the end of the 4th day that the subjects were approaching the limit of their endurance, and that the missing amino acid must be returned to the food without further delay.
I second this question. As a sometimes vegan, I've gotten kind of religious about making sure I get complete proteins with each meal, minimum on a daily basis. Would love to find out that I can be more lax about it.
also, @traveling_through_time, have you tried marinating tofu and searing/baking it? Something like this: https://www.noracooks.com/marinated-tofu/ Unmarinated tofu is the literal worst, good marinated tofu rivals the occasional steak I have.
Also, seitan can give pretty good mouth feels, and has a pretty mild flavor so you can make it taste like most anything. It's not a complete protein, so sometimes I do like ground seitan in black beans to get a sort of ground beef filling for tacos.
Though I will try "searing/baking it" - I am not sure whether I tried this variant...
I tried tofu some times, and even when everyone praised it as very tasty I needed to force myself to eat it.
My conclusion so far was that tofu is simply one of few foods that I really dislike. And I am not too picky "eating a lot of rice (and just rice for 5 days)" is something that actually happened.
Fair. I've been trying to make myself like tomatoes for the last 15 years, no dice yet.
The marinade is important largely to let the salt permeate the tofu, otherwise it tastes really dull. If you bake it (my recommendation), the "trick" is to bake it in 1/4 inch slices until its browned-almost-burnt, but not burnt, gotta flip once or twice. It fixes the texture problem giving it a better mouth feel, and I imagine does some chemistry shit that makes the falvors come out.
The real problem IMO with _not_ using tofu or seitan (or nuts, I guess) is that I don't see how to get a decent macro balance - I always end up super carb heavy. The macros of e.g. straight chickpeas are just about perfect, but you combine them with something to make it a complete protein and your carbs go off the chart. Beans and rice, same deal.
I enjoy chickpea + edamame. Quinoa is also a good base for vegetable protein.
Re: Tomatoes: Have you given dicing them a try? You might find that better than the no dice method.
... I'll see myself out.
Substack is reverting comments to "New First" every time I load a new article. It didn't do this for a while (not sure whether it reverted to chronological or was sticky). Is this an intentional change?
This is my fault. I used to remember to make Open Threads "new first", forgot for a while, and then recently remembered again.
If it's happening for threads other than Open Threads, it's a problem.
Okay, understood. Haven't seen it in non-Open threads.
I think "New First" is the default for Open Threads and "Chronological" for other posts. (I'm also think I remember a community discussion where this emerged as the consensus, so pretty sure its intentional.)
Don't know, but at least for the open thread I kind of prefer it.
It means the comments made during the later days of the week aren't completely invisible.
+1
Any industrial engineers in here, particularly with experience in safety engineering? I'm wondering if the discipline covers such things as individuals taking initiative to prevent disaster. It seems there are disasters (Challenger, Chernobyl) that could have been prevented by a well placed individual taking actions that would have seemed excessive to individuals who thought there was no danger.
I do mission assurance for space flight, mostly but not exclusively unmanned. This strongly overlaps safety engineering, in that a billion-dollar rocket and payload exploding is a Very Bad Thing whether there are people on board or not. And in that there are many people involved in the process of making sure things go as planned.
And if it's important, it's too important to be left to individual initiative. Individual performance varies too wildly to be counted on; you need a process where if some people screw up and everybody else just does their job by the numbers, the people just doing their job by the numbers will catch the screwups before anybody gets killed and/or any rocket explodes. Sometimes that doesn't happen, in which case it would be nice if someone exercising extraordinary initiative were to somehow fix the problem. And sometimes that part *does* happen, which is great when it does. But you can't count on it, and it can't be part of your plan if it matters.
Also, too *much* initiative sometimes means that where you once had a process that would have worked, now you have a dozen undocumented variations made by people who thought they found a way to make it "better" but who collectively opened a gap for failure.
I'm also a pilot, and in that realm we all understand that sometimes it will come down to one or two people having to fix the problem unaided and on short notice. If that happens, initiative matters and pilots are expected to exercise it. And are trained and tested to ensure they have the skill, knowledge, and judgement to exercise their initiative properly. But even then, we start with checklists and procedures designed to reduce as much as possible the slight chance that a pilot might have to exercise initiative.
Initiative is better suited to accomplishing great things, than to avoiding great catastrophes.
Thank you for your insight.
I also hear, however, that practically all organizations that undergo this process of becoming High-Reliability Organizations (you appear to be describing the processes of one), only undergo this transformation in the aftermath of catastrophe.
Is there any advice on how to transform an organization into an HRO, when the organization does not think its actions can lead to disaster?
An easier way to do it would be to selectively hire people from other HROs and build in a very conscientious way. Essentially piggy-backing on previous catastrophe without experiencing it yourself as an organization.
I think you can create such an organization in other ways, which would be much harder and less complete. What that requires is a founder that is very much invested in making an HRO, and emphasizing that throughout the organization's history. They would hire people with specific functions and goals, write policies that further those specific goals, and incentivize those goals. You couldn't reward success if it meant incentivizing non-HRO behavior. The major problem with that is new companies tend to succeed by being nimble and making fast changes. HROs succeed by making slow decisions that are carefully thought out. In order for an HRO to survive being created, it needs very strong financial backing and patient lenders/owners. Amazon (not necessarily an HRO though), to a large extent, seems to have had that. Jeff Bezos was willing to forego personal wealth and cashing out over a pretty long period of time, and kept the money going back into the company. He did that for eventual super money and that worked out well for him. A company trying to do that for reliability may have a harder time getting there. That said, SpaceX and the other private commercial space companies seem to be acting as HROs without personally experiencing major catastrophe. Of course, they have had a series of smaller catastrophes and a lot of their workforce came from existing HROs, so that's ground I may have already covered and not a new thing.
>That said, SpaceX and the other private commercial space companies seem to be acting as HROs without personally experiencing major catastrophe.
SpaceX hired a lot of people who would have been right at home in the "move fast and break things" culture of Silicon Valley, but as its president and COO it went with Gwynne Shotwell, a veteran of one of the oldest and most highly regarded HROs in the business. Disclaimer: it's the same HRO I now work for.
The other private space companies haven't done enough space flight for a lack of catastrophes to be really telling. And some of them have had more than their share of catastrophes before ever reaching space.
Specifically, I'm trying to figure out what it would take to have AI research conducted in a safe way. It sounds like pretty much all AI research should occur in an HRO context, but the puzzle is to figure out how to do that.
I'm also reading Engineering a Safer World on the recommendation of another commenter, which says the HRO paradigm is obsolete, but that hardly matters to my purposes: the idea here is to figure out how to conduct AI research safely, not to wed to a specific paradigm. I suspect the ideas in Engineering a Safer World can be applied to the AI community, but we shall see.
Your idea of getting people with HRO experience into this is probably worthwhile.
Good luck trying to figure out how to make the Wright Brothers into an HRO when no one has ever seen a plane crash before.
What the AI researchers are doing right now is not good enough. Something else needs to be attempted, and we have managed to do novel stuff safely before. It's one thing if there had been something equivalent to the Asilomar conference on recombinant DNA, but the one that happened on AI really was not the same at all.
Lay off all but your very best people, sell everything including the buildings at auction, and start over someplace new?
Unfortunately, I don't know of any recipe for turning a non-HRO into an HRO without a catastrophe in the middle. Nor can I think of any examples offhand, except by building a small HRO in an isolated corner of a larger organization, but in that case I don't know of any (non-catastrophic) way to pull the HRO-ness into the main organization. Preemptive institutional reform is a hard problem.
I'm fortunate enough to work for The Aerospace Corporation, which was created for the specific purpose of being an HRO and which was done right. That's not too difficult to do if you care, and there are plenty of examples to work from.
I would say that if individuals taking unusual initiative risky to the careers is necessary to prevent disaster, then it is a clear evidence that management, processes and safety engineering catastrophically failed.
(in software engineering it is often stated that system where someone can make a mistake and cause outage/failure/catastrophe is a system that is inherently unsafe, as mistakes are normal)
Well, if we're doing safety-in-depth (we should), individuals should be prepared to make such decisions. There is no particular reason that management, processes and safety engineering, should be the only layers of defense in the system, though of course, ideally they should also be there.
Oh definitely. And there are numerous cases where it worked (see Stanislav Petrov https://en.wikipedia.org/wiki/Stanislav_Petrov ).
But where people sacrificed (or would need to sacrifice) their careers to stop catastrophe is system that would block/eliminate useful well placed individuals that you mentioned.
I don't see how. Even if the RBMK reactor had been much better designed, and had a containment structure, some bizarre accident could still have happened, leading to the same situation of an irresponsible leader (Dyatlov) giving insane orders. Though of course, the scenario becomes much less likely, especially if all the people involved are very safety minded.
For many systems, it does not seem possible to remove humans out of the loop entirely.
Sure, but in a safer (Western style) reactor, the worst case scenario from following insane orders is a ordinary meltdown a la TMI that will not spew radiation into the environment. In a molten salt reactor, it's probably impossible to cause damage by doing something in the control room (it seems to me that the worst thing you possibly do in a molten salt reactor is add too much fuel. But even that shouldn't be dangerous if the reactor shuts down automatically on overheat)
It's good that it's possible to design safe reactors. But the broader conversation here is about the role of individual initiative in preventing disaster, which in my view, includes such things as ensuring we're not working toward Chernobyl-style blunders.
How do we know we're not currently building our very own RBMK-reactor-style deathtrap, but in another domain?
I was mostly thinking about social design (Glorious Leadership of Party is never wrong, wrongthink can send you to a gulag, Lysenkoism-type mess in science, no decent alternatives at all to party controlled jobs, official approval of various kinds necessary to get anything from meat to radio to apartment).
I am not expecting idiot-proof reactor that can be operated by children, but lying about defects, concealing failures, leadership ignoring facts, superpowerful leadership, no recourse where said leadership wants you gone and so on are a serious problem.
And not-so-bad reactor design would help significantly. There were many footguns, one of the more well known was that emergency scramming of reactor initially produced power spike - the exact opposite of desired or expected effect [1].
And the response after is hard to even parody (though this Netflix pseudocumentary managed to do this anyway).
[1] from https://en.wikipedia.org/wiki/Chernobyl_disaster
> Consequently, injecting a control rod downward into the reactor in a scram initially displaced (neutron-absorbing) water in the lower portion of the reactor with (neutron-moderating) graphite. Thus, an emergency scram initially increased the reaction rate in the lower part of the core.[7]:4 This behaviour had been discovered when the initial insertion of control rods in another RBMK reactor at Ignalina Nuclear Power Plant in 1983 induced a power spike. Procedural countermeasures were not implemented in response to Ignalina. The UKAEA investigative report INSAG-7 later stated, "Apparently, there was a widespread view that the conditions under which the positive scram effect would be important would never occur. However, they did appear in almost every detail in the course of the actions leading to the (Chernobyl) accident."[7]:13
Netflix pseudo-documentary? You mean HBO's Chernobyl? Was it really a cartoonish depiction, and if so, how? I know about the woman scientist being an invented character, there not being a big dark plume, or trees surrounding browning practically immediately, but not anything that would make me conclude that the whole thing is essentially a lie.
I would recommend looking into the study of high reliability organizations (https://en.wikipedia.org/wiki/High_reliability_organization), which looks at the organizational factors that allow groups that would otherwise be considered high risk to operate without major failures or disasters.
Interesting stuff. While the AI research community lacks some of the features of HROs (compressed time factors, high frequency immediate feedback) they need to apply a lot more stuff from them (high accountability, operating in unforgiving social and political environments, an organization-wide sense of vulnerability, widely distributed sense of responsibility and accountability for reliability, concern about misperception, misconception and misunderstanding that is generalized across a wide set of tasks, operations, and assumptions, pessimism about possible failures, redundancy and a variety of checks and counter checks as a precaution against potential mistakes are more distinctive).
> operating in unforgiving social and political environments,
Sounds like the tech industry to a T.
Please, I'm in tech. We're not engineers for the most part. Even at the highest levels, where people work on stuff that's supposed to be highly reliable, they're not working with the mindset of an engineer designing a bridge, much less an engineer working on a life-critical system.
Check out Nancy Leveson, especially her book "Engineering a Safer World". She writes a lot about common misconceptions about operators and safety engineering. One of her claim is that accident investigations often end up unfairly blaming the operators, who are almost always doing what they think is right based on their model of the system. The book describes strategies for designing systems to help operators make good internal models.
On your point, I think she would disagree. As I said, operators (and people in general) does what they think are safe. Accidents often lack a single cause, and systems migrate towards unsafety unless effort and design is put in to fight back (compare with entropy). Chernobyl had plenty of problems and if someone had heroically prevented the accident, all the problems would have remained for the future. What is needed is strong leadership and strong safety culture trough through the organisation (which is hard but not impossible).
It's true that in Chernobyl, if the technicians had forcibly thrown Dyatlov out of the room, it might have prevented the disaster that night, but then, they would have had no way of ensuring the same mess did not happen in the future.
But what about the Challenger? Roger Boisjoly, unlike the technicians at Chernobyl, knew exactly what was wrong with the Challenger, but the management at his consultancy and at NASA was reckless. Was it really impossible for him to save the Challenger?
And thanks for the recommendation.
Challanger had the same problem. Management wanted to fly, information that prevented flying never travelled far up the hierarchy since no-one wanted to bring up bad news. Disaster could theoretically be pushed a few years into the future (as with Chernobyl), but the fundamental problem wouldn'tbe fixed.
Also, no-one was "reckless". Management was not
NASA management was the most reckless. Morton Thiokol's management did see the O-ring problem that Biosjoly and other engineers detected, and the night before the launch, actually recommended no-go on it, but NASA management got pissed and pressured them to say go (apparently a highly unusual action), and Thiokol management relented.
So we have NASA management being reckless, and Morton Thiokol management and engineering being spineless as the final causes of the Challenger disaster.
Sure, ideally systems are such that a situation where someone needs to grow a spine does not arise, but it does seem to me that someone growing a spine would have prevented the Challenger disaster.
> would have prevented the Challenger disaster.
Likely, but that would just postpone it to another launch.
In the USSR yes, but I don't think the US is so dysfunctional that it can only learn certain lessons in the aftermath of disaster. Forceful action could have escalated the problem as far up the ladder as needed.
Management was not told that the problem was important, because the system was set up to not bring them bad news.
Seems more a question for social or organizational psychology.
Anybody got a covid vaccine effectiveness update? I'm trying to convince some hesitant friends of mine that are trying to get a dr's note, even though they are in risky careers (Nurse and Fireman).
I can point you to the WHO's latest weekly epidemiological update: https://www.who.int/docs/default-source/coronaviruse/situation-reports/20210810_weekly_epi_update_52.pdf?sfvrsn=8ae11f92_3&download=true
"results from an ongoing randomized clinical trial evaluating the 6-month efficacy of
Pfizer BioNTech-Comirnaty against SARS-CoV-2 infection (symptomatic + asymptomatic) in persons ≥ 12
years old reports an overall vaccine efficacy against infection and against severe disease ≥7 days post second
dose of 91% (95% CI: 89.0-93.2%) and 96.7% (95% CI: 80.3-99.9%), respectively, across 152 participating sites
in 6 countries. The authors also estimated VE against the Beta variant in South Africa and found 2 doses of
Pfizer BioNTech-Comirnaty prevented 100% (95% CI: 53.5-100.0%) of SARS-CoV-2 infections ≥7 days post
second dose, though confidence intervals are wide. 32 These results have not yet been peer-reviewed"
The report goes on to describe finding of several other recent studies, several of which have not yet been peer reviewed.
Regarding the meetups, are you going to contact people and give them a chance to sort their plans out before posting all of them? I just put in a placeholder place and time, since I don't expect to have to be the one to host anyway, but just in case.
No, I wasn't planning on contacting all 200 (!) of the people who volunteered. I hope most people entered true information, if not I'll try to solve it after the fact.
Would I be able to ask the other people who would come when and where would work for them, or something like that?
Did you “Lizardman Constant” the meetup survey?
Was the placeholder information obviously (as in, ridiculously, impossibly impractical) a placeholder?
If you said “the middle of the city at 12am”, I would not be shocked to see someone looking confused at the default pin location for your city on google maps at midnight, given a large enough population.
I don't remember what I put. I just wanted to say I would be willing to host if absolutely nobody else would be willing to. Is there a way to view or edit my response?
I'm going to see a psychiatrist for the first time, and since appointments are very expensive and hard to come by where I live, I want to try to get the most out of it. Was hoping people here might have some advice. I've written a summary of the situation and then some of my questions at the end.
Basically I'm concerned I might have undiagnosed ADD. When I was a kid, many teachers and (I think but not sure) my GP suggested I might have ADD based on my behavior, but my parents were pretty ideologically opposed to ADD diagnosis of children in general and preferred the explanation that I was a gifted child who was bored.
My whole life I've had a pretty typical list of symptoms - I struggle to concentrate on tasks and I always feel scattered and lost among different threads of thought and attention. If I do concentrate I hyperfocus and I have to whip myself up into a highly stressed state to be able to do it, I always do all my work at the last minute in a state of panic, I struggle with simple life admin tasks. I'm very restless and I'm always jiggling and moving or I catch myself jumping up and moving around for no reason. As a kid I had a lot of social difficulties but over time I've learnt to mask these pretty well. I acted up terribly in school when I was younger but calmed down as a teenager. I watch other people sit down and do a few hours of gently focused productive work without distraction and I can't imagine ever being able to do that. I've also suffered from (and been diagnosed with and treated for) depression during several periods since I left school, but I was also pretty depressed as a teenager.
I'm worried that a psychiatrist will dismiss this because I did very well at school and at university. I suspect that having very high intelligence has concealed the issues with my attention, and I think I'm actually massively underachieving. Consequently I've worked in jobs that haven't been very challenging or interesting, and I've become bored and left after a couple of years in each one. The last job, though, required a lot of 'self starting' planning and focused work on tasks that weren't just given to me on a proverbial conveyor belt, and I struggled hard and often spent hours just staring at my screen panicking about what to do next, waiting for an email to react to.
Now I'm self-employed and making a modest living, largely enabled by a partner who earns much more than I do. I'm not doing anywhere near the requisite amount of focused work to make my business grow and thrive. If the next decade passes like the one we just had, I'm going to be extremely unhappy, and either very poor and single or largely dependent on my partner. My depression will come back. I DO have the skills, knowledge and ability to do really well at what I do, what I need is just to be able to sit down and do good, focused work for hours a day, every day, without having to engineer emergencies that ramp up my anxiety to unsustainable levels. I drink ridiculous amounts of coffee and find I get a good productive couple of hours after that (like right now.)
So questions:
1. I really want to try medication for this - probably Ritalin. Many years ago when I was at university I tried dexamphetamine (in irresponsibly large recreational doses) on about five occasions and found it really enjoyable for meditating and reading. I stopped experimenting with it when I noticed I felt a compulsion to take more, and never took it again since. I remember the feeling of being on dex as having my attention turn into a spotlight I could direct at will wherever I wanted and hold it there easily, and it was very relaxing. I think to the psychiatrist I will seem to know a suspicious amount about these medications and I'm worried he might take me for a drug seeker - although this is a pretty perverse bind to be in, because I am indeed seeking the drug that is proven to be effective against the condition I think I have. How do I talk about these medications and the research I've done on them without sounding like I'm just trying to get a script to get high? Should I not mention previous recreational use from ten years ago?
2. How do I persuade the psychiatrist that I'm actually underachieving, and that I'm not simply regularly achieving and beating myself up unnecessarily? I think this will be easier than before, since I'm much more precariously employed so my life looks worse on paper than it did when I had a steady job. How do I explain how serious my situation is?
3. How do I explain that I've already tried so many different methods like meditating, productivity software, GTD, scheduling my day in 15 minute chunks the night before, and I just can't seem to finish any complex, multi-stage projects? What do I do if the psych says 'thanks for waiting three months and giving me all that money, now go away and try writing down your goals for the day the night before'?
Regarding 1., read Scott's post on Adderall and Ritalin if you haven't already done so, especially the first part about the psychiatrist's perspective: https://slatestarcodex.com/2017/12/28/adderall-risks-much-more-than-you-wanted-to-know/
In my personal experience, it will have more to do with the psychiatrist than it does with you. Some are skeptical, others hand out Adderral like candy.
Relevant SSC: https://slatestarcodex.com/2017/12/28/adderall-risks-much-more-than-you-wanted-to-know/
Poor focus, low motivation, depression and other cognitive effects are also symptoms of low testosterone. How likely this is relative to ADD depends on other lifestyle factors, like your body composition, diet, whether you lift weights, etc. Don't be too convinced by a single diagnosis right off the bat, and don't think cognitive symptoms necessarily means it's purely a cognitive problem. Hormone tests are easy, so why not cross out that possibility right out of the gate.
That's interesting. Is low testosterone something that can be corrected, either through treatment or some other fix? I have seen many "hormone balancing" advertisements, mostly it seems for weight loss, so I am pretty skeptical about how well those things work. Never heard of the hormone thing changing other mental aspects.
It's ultimately about having the right balance of testosterone, SHBG, estrogen, etc. Natural ways to improve hormone levels, in rough order of significance: get sufficient quality sleep, lose body fat, add muscle mass, dietary changes (less sugars, more fiber, more protein, more cruciferous vegetables).
Testosterone levels also decrease 1-5% per year after 30 as well, so aging is also a factor that will exacerbate any pre-existing imbalance. Maybe crotchety old men mostly just have low T.
There are of course more direct interventions like TRT, which come as skin patches, gells and injections, but obviously healthy lifestyle changes are preferable because they have all sorts of synergistic effects.
Thanks for the info, that adds a lot to think about.
I could have written this myself! So many things in common. Deadline anxiety as the only reliable motivator. High intelligence allowing for close-to-zero effort graduation. Occasional inhumane bouts of very deep concentration (as a kid, I have sometimes gone so deep to not respond to people talking to me, earning the remark "he is in trance again"). Except for the economic outcome: I have somehow became a well paid executive, earned money, invested well and gained total independence. Now, though, the ADD problem is even bigger: it is very difficult to motivate myself to do anything productive when I really don't have to. And there is only so much satisfaction you can get from sport and hobbies. Overall life feels rather good though.
I have been exactly where you are. I thought at first me wife was posting using a version of my story to get advice for me…
Chin up, in my experience getting psychs to not prescribe you aderol is more difficult than getting it. There is also apparently a type of ADD that presents as “brain does fine on tough stuff, but can’t be bothered to even call it noon easy stuff.” That’s what I was diagnosed with, so don’t sweat over achieving; it’s apparently not uncommon.
That said, I am not sure how much Aderol actually helps. Seems to a bit, but much like coffee for me it doesn’t do much if I take it regularly. I have yet to take enough to be able to work like a normal person, much less my over achiever wife, largely because I am scared of how much that might be and whether or not I would be able to stop.
Best of luck to you, and let us/me know how it goes.
I agree with Friv Jones that you are overrating how suspicious the psychiatrist is likely to be of your intentions. Many psychiatrists mostly prescribe meds and believe in them as useful change agents. Getting meds out of them isn't like getting admitted to Harvard Law School! If you encounter any problem in getting a script for ritalin or similar, I think it would likely take the form of the psychiatrist being in favor of your trying a different class of drugs, such as an antidepressant, before you try some form of speed. (And that might indeed be worth trying, especially if you have not tried an antidepressant before.)
The likelihood is that if you go in asking for an ADD med you will get one. Here are 2 things to consider before you go try to do that:
-There are a lot of things other than simple brain-not-working-right ADD that can cause somebody to have trouble with attentiveness, follow-through on plans and executive function in general: Preoccupation with matters other than task at hand; fear of failure at task; lack of appetite for task; dislike of task-setter; chronic pain; misery; anxiety. Productivity software and other surface fixes are not likely to be helpful if something of this nature is what's wrong.
-Taking adderall or similar is not going to tell you whether you truly have brain-not-working-right ADD, because you are pretty much guaranteed to feel better after you take the tablet: more optimistic, more energetic, more able to focus. So if a prescribed upper makes you feel that way, all you're going to know is that you're typical. Over the next year you'll probably be able to tell whether the drug is fixing whatever's wrong (or at least whether taking the drug is worth it): If on the drug your life changes in a big way -- if big trends change, if you're able to meet some long term goals -- then yeah, drug's probably making a difference. If it's more that the drug reliably makes you perkier, but you're not really advancing and feeling a lot more satisfied with how you're using your time -- then it's just basically a coffee habit, only worse for you.
I have sought and obtained scripts for ADD drugs at several disparate times in my life, -- meaning I have had a lot of first meetings with new people to whom I must explain my situation.
I think you are overrating how suspicious your psychiatrist will be of your intentions. Your ambition to get a prescription to that specific drug is maybe a little narrow -- but hopefully your psychiatrist will expose you to other (similarly effective) options as a part of the discussion. I also think you are overrating the likelihood that your psychiatrist will attempt to evaluate how much you achieve. They will not ask to see your resume or your grades, for example. I don't expect that they would try to run you through, like, coaching techniques like journaling.
I don’t have any advice. I’m just chiming in to say I feel like I could have written many passages in here. I’ll have spurts where I can focus, but then it feels like weeks go by where I’m lucky if I’m productive for one hour per day. I know I’m worse when I’ve been drinking in the previous couple days or my sleep is bad. I’ve reals all the books on focus, concentration and productivity too. Often a good one will inspire me for a week or too, but sooner or later I end up back in the same rut.
Read*
I'm very interested in answers to this. I'm about one step before this person in the process (ie haven't yet sought a referral to a psychiatrist) but the backstory and concerns are otherwise basically identical.
How soon is the appt?
At the end of October
I'd be open to a meetup in Taipei, but the situation here is not really like other places. Low vaccine access, but also close to zero cases thanks to entry quarantines and testing. I'd feel very safe, but not sure that's universal.
Any interest (or opposition to this happening at all) here?
Scott, can I send you a short screen capture of a substack comment bug/misbehaviour via youtube?
I'd simply post it here, but that puts my real name in front of everyone :p
Would you feel comfortable emailing it to me, either the link or just as a video file? <my username> + <the letter n> at gmail dot com. I don't work at substack but I maintain an extension that makes the experience more bearable.
I neglected to mention that it's primarily a mobile bug. Unless you've got a custom substack app (and even then, I hate bespoke apps :D)
There's not much I can do besides forward it to the Substack team, so you might as well contact them directly.
my priors say you forwarding it will get a lot more traction compared to pycea contacting them directly
I've been trying to find a plot of the frequency of extreme weather events over time to see if they're becoming more common with climate change. However, it's frustratingly hard to find. The best I could do was this graph from the Met Office: https://www.metoffice.gov.uk/weather/climate/climate-and-extreme-weather
...but it doesn't link to any paper, nor does it explain its methodology. Does anyone know of a highly regarded paper, preferably a review paper or meta-analysis, that shows whether or not extreme weather events have become more frequent with time?
Try industry reports in the property and casualty insurance sector that cover catastrophic losses. For instance, see here p 17: http://assets.ibc.ca/Documents/Facts%20Book/Facts_Book/2020/IBC-2020-Facts-section-one.pdf
If you have the time needed, what about hunting through the references of IPCC reports?
Hi. Based on your replies here, I note that you don't want commentary / projections, just studies of historical data. I try to accommodate that below: I've limited it to a maximum of three references per extreme weather event type, as you indicate below that you don't want too many papers. Let me know if this hits the mark for what you are requesting, and please clarify if it doesn't.
Temperature extremes:
Dunn, R. J. H., Alexander, L. V., Donat, M. G., Zhang, X., Bador, M., Herold, N., et al. (2020). Development of an Updated Global Land In Situ-Based Data Set of Temperature and Precipitation Extremes: HadEX3. J. Geophys. Res. Atmos. 125. doi:10.1029/2019JD032263
Alexander, L. V. (2016). Global observed long-term changes in temperature and precipitation extremes: A review of progress and limitations in IPCC assessments and beyond. Weather Clim. Extrem. 11, 4–16. doi:10.1016/J.WACE.2015.10.007
Zhang, P., Ren, G., Xu, Y., Wang, X. L., Qin, Y., Sun, X., et al. (2019c). Observed Changes in Extreme Temperature over the Global Land Based on a Newly Developed Station Daily Dataset. J. Clim. 32, 8489–8509. doi:10.1175/JCLI-D-18-0733.1.
Heavy Rains:
Zhang, W., and Zhou, T. (2019). Significant Increases in Extreme Precipitation and the Associations with Global Warming over the Global Land Monsoon Regions. J. Clim. 32, 8465–8488. doi:10.1175/JCLI-D-18-0662.1.
Sun, Q., Zhang, X., Zwiers, F., Westra, S., and Alexander, L. V (2020). A global, continental and regional analysis of changes in extreme precipitation. J. Clim., 1–52. doi:10.1175/JCLI-D-19-0892.1.
and the Dunn et al study above.
Floods:
Do, H. X., Westra, S., and Leonard, M. (2017). A global-scale investigation of trends in annual maximum streamflow. J. Hydrol. 552, 28–43. doi:10.1016/j.jhydrol.2017.06.015
Gudmundsson, L., Boulange, J., Do, H. X., Gosling, S. N., Grillakis, M. G., Koutroulis, A. G., et al. (2021). Globally observed trends in mean and extreme river flow attribution to climate change. Science (80-. ). doi:10.1126/science.aba3996.
Droughts:
I'm going to be honest, the literature here is too complex to grab a few papers, given that there are different types of droughts. I omit this due to not knowing what to present to you, rather than due to a lack of research (almost the opposite problem!)
Extreme Storms:
- Tropical Cyclones
Kossin, J. P., Knapp, K. R., Olander, T. L., and Velden, C. S. (2020). Global increase in major tropical cyclone exceedance probability over the past four decades. Proc. Natl. Acad. Sci. 117, 11975–11980. doi:10.1073/pnas.1920849117
^Note the published correction for this as well.
- Extra-tropical storms
Wang, X. L., Feng, Y., Chan, R., and Isaac, V. (2016). Inter-comparison of extra-tropical cyclone activity in nine reanalysis datasets. Atmos. Res. doi:10.1016/j.atmosres.2016.06.010.
- tornado (US)
Gensini, V. A., and Brooks, H. E. (2018). Spatial trends in United States tornado frequency. npj Clim. Atmos. Sci. 1, 38. doi:10.1038/s41612-018-0048-2.
Thank you! This is great. I've skimmed through most of the papers. The most striking findings include:
Extremes:
Increase in number of heavy precipitation days by 2 days (1900 to present)
Rx1day (max precipitation in 1 day) increased by several mm from 1900 to present
TX90p (% days when daily max temp > 90th percentile) increased by ~30% from 1900 to present
TXx (maximum Tmax) increasing by 0.13 C/decade from 1951 to 2015, with TNn (minimum Tmin) increasing by 0.4 C/decade.
Heavy Rains:
In monsoon regions, an increase in annual maximum precipitation by about 10% per K of warming (Zhang & Zhou 2019)
"The global median sensitivity, percentage change in extreme precipitation per 1 K increase in GMST is 6.6% (5.1% to 8.2%; 5%–95% confidence interval) for Rx1day and is slightly smaller at 5.7% (5.0% to 8.0%) for Rx5day." (Sun et al 2020)
Floods: Many sites with both statistically significant increasing trends and decreasing trends in magnitude of floods, from 1900 to 2014 (Do et al 2017). If the start point is 1955 or 1964, the dominance of increasing trends is very pronounced.
Tropical Cyclones: from 1979 to 2017, "the major TC exceedance probability increases by about 8% per decade, with a 95% CI of 2 to 15% per decade." (See also Scott Alexander's post on this, below)
Tornados: in US, flat national trendline from 1979 to 2016, but increasing or decreasing trends in different regions
Some of your dates are from 1900, while others have much later starting points. Was there a purpose or necessity of the later starting dates? For instance, I have heard that the 1940s were unusually cool, so the TXx going from 1951 to 2015 might be picked for a good reason, but might have been picked to show a higher effect. With floods that's more clearly so, as you note that by choosing a much later date than 1900, the effect is more pronounced.
Glad to help, and thanks for reporting back what you learned.
One of the key things to keep in mind is that prevalence / intensity can be important to understand, but ecosystem (and human) responsiveness and adaptation is non-linear. The plants and animals (and people) of monsoon regions might be able to adapt to a 20% increase in annual precipitation, but a 40% increase? ditto with other extremes.
It's not clear what the claim "extreme weather events are becoming more (or less) frequent means." Hot summers are becoming more frequent, cold winters less frequent. How do you decide how not or cold something must be to qualify, in order to have a count of each to add together? Similarly, at least one hurricane expert (Chris Landsea) suggested that hurricanes would be getting a little less frequent but a little more powerful as a result of climate change. Does that count as extreme events getting more common or less?
We don't have to get into those semantic debates. If you know of a good paper that separately studies the frequencies of hot and cold winters, or of the frequency and power of hurricanes, I'd love to read it.
On hurricanes, see "Global Warming and Hurricanes: An Overview of Current Research Results" https://www.gfdl.noaa.gov/global-warming-and-hurricanes/
Choice quote: "Tropical cyclone intensities globally are projected to increase (medium to high confidence) on average (by 1 to 10% according to model projections for a 2 degree Celsius global warming). ... most modeling studies project a decrease (or little change) in the global frequency of all tropical cyclones combined."
I don't, although there are some figures on total hurricane energy at http://climatlas.com/tropical/
You can find links to heat vs cold deaths at:
https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(15)60897-2/fulltext
I don't have a source for frequency of hot or cold winters, and it isn't obvious how hot or cold should count as "extreme."
My point was that the statement "extreme events are becoming more frequent due to climate change" was probably meaningless, which suggested that people who say it are either not thinking clearly or being deliberately misleading.
I can't do your research for you, but I suggest using "https://www.google.com/advanced_search".
If you search for words like "hurricane" or "drought" or "extreme" together with "frequency" and choose a site of interest such as "Judithcurry.com" or "wattsupwiththat.com" or "rogerpielkejr.com" or "realclimate.org" you will find much more information on the subject, often including references to data sources and methodology.
And Dr Ryan Maui has been collecting global cyclone data back the 80s which he's turned into some excellent graphs. Because his data hasn't reinforced the predictions of (some) of the climate modelers that extreme weather should be getting worse, he's been accused of being in the global warming denialist camp. But he's always declared himself to be firmly in the AGW camp. He's said that the data should speak for itself...
http://climatlas.com/tropical/
For US tornados, NOAA has lots of data here... https://www.spc.noaa.gov/wcm/
And you can download all the data from 1950-2019 in a .csv format and plug it into a spreadsheet.
https://www.spc.noaa.gov/wcm/data/1950-2019_all_tornadoes.csv
I haven't looked at the data from the latest years, but the 1970s were the peak decade for tornados. And the 00s were very quiet. This lull continued into the first few years of the 10s, but I don't know if there was an upward trend the last half of the 10s nor what's happening now that we're in the 20s.
I downloaded the CSV and counted the number of tornadoes per year, assuming that each row represents one tornado. There seems to be a strong upwards trend since 1950: https://imgur.com/a/oLWM5n8
Of course, some or all of this could be due to improvements in tornado monitoring. I'd feel much more comfortable with a reputable scientific paper than with plotting datasets on the web.
Yeah, raw data is often misleading on climate change topics, due to confounders such as changes to technologies, methods and comprehensiveness of coverage over time.
That's the primary data that researchers use — NOAA satellite data, temperature gauge data, and sea-level gauge data. You're just looking at it unmediated. As a government agency they're obligated to put it all on line. The trouble is trying to find it because I wouldn't say its very well organized. Interesting, though, because this isn't what I saw when I graphed the data a few years back. Not sure if I sent you to a different dataset than I was looking at. Sorry if I steered you in the wrong direction, but AGW is not a subject that has interested me lately.
Has technology for detecting tornadoes changed in ways that would create a positive trend? If in 1970 detection was mostly by a spotter seeing a funnel cloud and in 2000 it was mostly by Doppler radar, you might get an increase without any increase in tornadoes.
Matthew Barnett collected some interesting figures here, one of them shows fewer hurricanes making landfall, another shows less droughts/ devastation by wildfires (human countermeasures would be a confound ) . Worth checking out, and references are provided https://m.facebook.com/permalink.php?story_fbid=1197111504049534&id=100012520874094
Maybe you've already seen it, but there was a good discussion in the comments here about some papers on whether hurricanes are getting worse - see eg https://astralcodexten.substack.com/p/highlights-from-the-comments-on-march
Interesting, thank you! Titanium Dragon's quote from NOAA is particularly interesting, as it basically says hurricane data was too noisy and too unreliable in the first half of the 20th century to say much:
"Existing records of past Atlantic tropical storm or hurricane numbers (1878 to present) in fact do show a pronounced upward trend, which is also correlated with rising SSTs (e.g., see blue curve in Fig. 4 or Vecchi and Knutson 2008). However, the density of reporting ship traffic over the Atlantic was relatively sparse during the early decades of this record, such that if storms from the modern era (post 1965) had hypothetically occurred during those earlier decades, a substantial number of storms would likely not have been directly observed by the ship-based “observing network of opportunity.” We find that, after adjusting for such an estimated number of missing storms, there remains just a small nominally positive upward trend in tropical storm occurrence from 1878-2006. Statistical tests indicate that this trend is not significantly distinguishable from zero (Figure 2). In addition, Landsea et al. (2010) note that the rising trend in Atlantic tropical storm counts is almost entirely due to increases in short-duration (<2 day) storms alone. Such short-lived storms were particularly likely to have been overlooked in the earlier parts of the record, as they would have had less opportunity for chance encounters with ship traffic."
Titanium Dragon also says the following, which is fascinating, but unsourced:
"We basically only have 50ish years of hurricane data, and for reasons we still don't understand, the 1960s-1980s were a particularly quiet era for hurricanes. Start drawing your line from that era, you'll see an upward trend - but if you go back to the late 19th and early 20th century you see a number of extremely active hurricane seasons. Indeed, 2005 isn't even the all-time leader for ACE (Accumulated Cyclone Energy) in a season - the winner is 1933, and that's probably an underestimate as it was from the pre-satellite era. 1893 and 1926 are #3 and #4, respectively, and again, are probably underestimates (especially 1893)."
Another thing to note about hurricanes is that *damage* from hurricanes is going to be much, much more noisy than the hurricane data (which is itself noisy already). It can be predicted about as reliably as predicting the take from an average hour at a casino. Because while we might have some idea how many hurricanes there will be, the number of hurricanes that make landfall is pretty random, and the extent to which they hit major cities (where the big damage happens) is more so.
Why do you think that is relevant? I think that looking for further scientific papers on the unfolding environmental disaster is a sign of analysis paralysis. Extreme weather events are but a minor part of the overall picture, which extends further than the climate.
I'm not asking for further scientific papers; I'm looking for one. I think it's relevant because I want to put some numbers to statements like "droughts are getting worse" or "hurricanes are more frequent". Are they 1% worse, or 10x? The answer is important to informing my understanding of the world and the policies I'd support.
Are you looking for one paper, or the holy grail called Truth? Opinions differ; "figures don't lie, but liars figure." All "numbers" relating to historical climate data are subject to interpretation. You can find papers saying extreme events are getting more frequent, and you can find papers saying the opposite. Finding "one paper" ought to be easy.
I'm after the holy grail called Truth, but I have to start somewhere. If it's easy to find "one paper", could you link to the paper you have the highest opinion of? This could mean the one you think is most robust, most reliable, most comprehensive, most even-handed, most interesting, or some combination of the above.
Finding "one paper" is easy because there are so many. Finding "the definitive paper," if there be one, is not. I've seen many reports on the issue and disputes thereon, but it's not my hobbyhorse so I've naturally not kept track of sources; but I'd rely more on Pielke Jr. than Holdren. Google for their joint names and follow the bread crumbs…
Even if neither droughts nor hurricanes were increasing, I see it as certain we should still be supporting some pretty radical policies. The Holocene Extinction (https://en.wikipedia.org/wiki/Holocene_extinction) is a mess, ocean acidification an even bigger mess (https://en.wikipedia.org/wiki/Ocean_acidification), and you will note that's without even getting to discussing the climate.
There is no point to getting a precise quantification of such a small feature of the overall issue, as it would be an exercise in missing the forest for the trees.
"The Holocene extinction includes the disappearance of large land animals known as megafauna, starting at the end of the last glacial period. " (from the piece you linked to).
Humans have certainly had a large effect on the world, but most of that has nothing to do with climate change. Falling ocean pH, on the other hand, is probably due to increasing atmospheric CO2, but it isn't clear what the effects will be. Calling it "acidification" is technically correct but misleading, since what is actually happening is the ocean becoming less basic, shifting in the direction of neutral pH. But that can still be a problem for organisms adapted to their current environment.
> is an ongoing extinction event of species
That was a very misleading quote you pulled, Friedman.
> most of that has nothing to do with climate change
Exactly my point, climate change is only a piece of the overall mess.
> Calling it "acidification" is technically correct but misleading, since what is actually happening is the ocean becoming less basic, shifting in the direction of neutral pH. But that can still be a problem for organisms adapted to their current environment.
Maybe if there are people who interpret it as "the ocean will burn our skin off!". And who knows maybe there are. But even then, this seems like a nitpick.
I think a lot of the rhetoric around climate change is deliberately misleading, and linked with a wildly exaggerated picture of the actual evidence. "Acidification" is one example. I might be mistaken, but my guess is that most people who hear the term do assume it means the oceans becoming acidic, and that that is part of the reason for using the term.
What is misleading about the quote? The way you originally put it, someone who didn't know what "Holocene" meant would assume it had something to do with AGW. That quote, from the piece you linked to, makes it clear that it is the whole time period during which humans were affecting things.
That's fine, feel free to start a new thread about the radical policies you'd like and I'd read it. I'm interested in "such a small feature of the overall issue", but if you're not, you don't have to discuss it.
You can proceed with your bikeshedding, but I will point out this is indeed bikeshedding.
I think it is relevant, as it's a big piece of the discussion around climate change.
Besides, I don't understand why you are talking about "analysis paralysis"- about what? Paralysis for what choice? It is an interesting question in and of itself, isn't it?
There is no point to further discussion on climate change and environmental degradation (it's not just the climate), other than to just say to skeptics and optimists "Comrade Dyatlov... I apologize, but what you're saying makes no sense" (https://youtu.be/rFYbe91tPJM?t=130)
It's crystal clear at this point a huge part of the problem is that Dyatlov is terminally normalcy biased, and so the only discussion to be had is by the lucid, and only about how to evict the Dyatlovs from the control room, because whatever the solution is, Dyatlov has shown he will not contribute to it.
> There is no point to further discussion on climate change and environmental degradation (it's not just the climate)
Sorry, but you are not sufficient authority to people in general obey what you decreed. High quality scientific research is far more convincing. At least to me.
And there is huge gulf between "highly annoying to humans, will devastate wildlife" to "broadly speaking, people are fucked" - and here, from what I know, situation is not entirely clear.
Don't listen to me, listen to the IPCC saying "code red for humanity". If that's not good enough for you, then it would appear that nothing could ever be.
I) I am actually doing some things, far, far more than even a typical activist (I am not delusional about my impacts, I know that they are small).
II) Strongly worded reports are not a pinnacle. Many things could be more convincing (though waiting for them would be a tragic mistake)
III) though has anyone manage to track down actual report? I am thoroughly uninterested in mangled media reports about it and https://www.google.com/search?hl=en&q=%22code%20red%22%20site%3Awww.ipcc.ch https://duckduckgo.com/?q=%22code+red%22+site%3Awww.ipcc.ch&t=ffab&ia=web failed to find it
Im not sure I follow exactly what you're talking about, but I think if you are just shutting down questions of the sort of "are there papers on X" where X is related to climate science, then you're doing a disservice.
What I am saying is that no further analysis is needed on the matter, the thing is bad. Though that's not a reason to pause research, further research about establishing "exactly how bad" is not needed. The real question is why we aren't fixing it, why we're so akratic.
> What I am saying is that no further analysis is needed on the matter, the thing is bad.
So mimi is correct, someone is asking for information and you're intentionally trying to shut down this inquiry because you think the inquiry itself is unnecessary or "wrong", as if you get to decide how people spend their time or how they research topics or what topics should be researched.
I'm honestly baffled by this mindset. I assume you think you're being helpful, but I can't see that working from the PoV of the person asking for information.
Huh, I recall a graph predicting global warming would be a net economic win for the northern US and Canada. And net bad for southern US and Mexico.
How much would you pay for a deliberately boring news feed? Not that it shies away covering important topics, but that its goal is to minimize your surprisal when you read someone else’s headline.
This exists and is called Reuters?
Maybe it’s my age, but 80% of the news has seemed pretty Capt. Obvious for the last half decade or so.
No, because you'd still be surprised at reading other people's headlines, and that was the criterion. Presumably to minimize surprise at reading other people's headlines, you'd have to report the truth before other people do, which in itself would be surprising and not boring.
Then let me ask a related question; would you pay for a news source that was the anti-clickbait? Say if they wrote all titles as clear declarations of fact instead of obfuscation or questions, they pared the articles until they were terse, and if it's appropriate explain why this seemingly weird outcome is just normal business wearing a funny hat?
Matt Levine does a good job of getting to the point and explaining finance and law so that future headlines give you that "yeah, duh" feeling that Josaphat was describing. But he's got a definite and fun style, and his column is relatively long, so that's not exactly what I'm trying to describe, but he's closer than most.
Good question. I'd certainly replace the newspaper I'm currently subscribed to, and whatever I'm paying them. How much higher I went would probably also depend on what they covered. (e.g. I put effort into getting news that American media seems to consider too foreign to bother covering.)
It would help a lot if their journalists appeared to be at least as numerate as a bright junior high school student ;-) (I hesitate to aim too high. After some of the local covid coverage, I suspect most journalists are nowhere near that numerate ;-() What I really want are journalists who understand _at least_ statistics-101-for-non-majors.
What are your sources for news the American media isn't covering?
Nothing too exotic - BBC, CBC, and Al Jazeera.
The CBC isn't a source for general world news - much of my family is still in Canada, so I'm looking for Canada in particular.
I hadn't thought about practicing my French or trying to stretch my probably inadequate German by reading their newspapers. I should have.
I don't know about DinoNerd, but I tend to read Le Monde or Le Figaro once a week - but that's as much to keep my French up to date as to actually read the news. They do have better coverage of (Francophone, West) Africa in particular, and also of internal EU politics, though that's about as important as the equivalent "who's up, who's down" in Washington.
If you do have a language to the level where you can read a serious (non-tabloid, non-dumbed-down) newspaper in that language, then I'd suggest doing so because it will definitely report things that don't appear in the mainstream English-speaking media.
I mean, it’s exactly what *I* want in a newspaper. Maybe other people do too.
For example - and I apologize if this veers too close to politics - I watched both the 538 and the Crooked Media coverage of the 2020 Dem running mate selection process and who they thought would be selected and why, and it was very clear which group of people had been in the room where it happens. Armed with that knowledge, the next few weeks of headlines were entirely predictable.
A puzzle for those who enjoy such things: for which values of n is it possible to paint the vertices of an n-dimensional hypercube in n colours, in such a way that no point is adjacent to two points of the same colour?
Wait, this is asking whether you can n-colour an n-demicube, because hypercubes have no odd cycles and so the distance-2 becomes distance-1 in the alternation.
My guess is therefore "yes" for n > 3 since it can't contain an n-simplex (though it will contain (n-1)-simplices). Bugger if I know how to prove it, though.
Consider that if we have a correct coloring for the n-hypercube (in integers mod n), we can define an injection f from the vertices into the vertices such that color(f(v)) = color(v)+1 (at each vertex, choose the unique neighbor that has the next color - if the function so induced failed to be injective, this would mean n=f(a)=f(b) for a!=b, from which a,b have the same color and share the neighbor n, so the coloring is incorrect). This induces a sequence of injections from each color-class into the next, so all color classes have to be of equal size.
This means that our number of vertices, 2^n, is divisible by the number of colors n, so n has to be a power of 2.
I'm guessing that there's a general construction for n a power of 2, but I haven't been able to find it yet.
That's correct, and it's a much more beautiful argument than the one I was going to use. The general construction is:
Gur pbbeqvangrf bs rnpu cbvag tvir n fgevat bs a mrebf naq barf.
Ahzore gur pbbeqvangrf naq gur pbybhef va ovanel; pbybhe rnpu cbvag nppbeqvat gb gur kbe bs gur cbfvgvbaf bs gur frg ovgf va gur pbbeqvangrf - sbe rknzcyr, va 4q gur cbvag 0101 unf 1f va cbfvgvbaf 1 naq 3, juvpu ner 01 naq 11 va ovanel, fb gur kbe vf 10 naq jr pbybhe vg va pbybhe 2.
Guvf zrnaf gung rirel cbvag vf nqwnprag gb cbvagf bs nyy pbybhef.
Easy answers: possible for n = 1, 2, 4; impossible for n = 0, 3.
Isn't this possible for all n>1? Just set the color of a point in {0,1}^n to be equal to the sum of its coordinates modulo n.
You're thinking of "no two points of distance 1 share a color", this is "no two points of distance 2 share a color".
My mistake, I figured I was missing something.
Is this related to the coins on a chessboard puzzle? (isomorphic to: for which values of n is there a solution to the coins-on-a-chessboard-with-n-squares puzzle?)
Well observed - that was what I was thinking about when I came up with it.
Hanania says that people go to grad school rather than become welders because they value status and influence above money. I didn't go to grad school for two years and get a master's degree because I wanted status and influence. That's silly. I did it because I wanted a white-collar career for which a master's was the entry credential, a career that I'd enjoy more and be better at than I would be a welder.
I think you're just unable to see the truth of the matter because it's too baked into your worldview. It's a career that you'd enjoy more and be better at than a welder because it's higher status. You wanted a white collar career because it's higher status. You'd like being around people with degrees because they're higher status.
There's a general college earnings boost and there are certain gated high paying careers (doctor, lawyer, scientist). But if you're maximizing pure ROI then a coding bootcamps or elevator tech schools or whatever often beat median colleges in lifetime earnings, debt to income boost, etc.
Try asking yourself why you want a white collar over a blue collar career. It's probably a bunch of classist assumptions. Not that there's anything wrong with being white collar! But it's clearly the more socially privileged workforce regardless of raw income.
Nonsense. I wanted a career in software development because I absolutely adored computers since I was vey young, and was never interested in cars or ... what else do blue collar workers do? Anyway, higher status? ppsh. *farts loudly* My car is a rusty 1999 Ford Taurus that squeaks really loud.
I mean, you are right in some sense, but.... I am socially liberal and another important parts of my worldview are that 1) I do not want to do hard physical labor and 2) I want to do intellectually complex things instead of dull repetitive tasks.
It is also of course true that ceteris paribus 3) I want to do higher status jobs rather lower status jobs, but 1) and 2) are imho sufficient explanations why I would require hefty wage increase if someone would want to poach me to a welding job.
Statements like these reek of people who don't actually come from a blue collar background. My parents pushed me into college education and white collar work because my dad had seen his own body, his father's, his grandfather's, and his great-grandfather's bodies and energy destroyed by a lifetime of manual labor. Even skilled manual labor can destroy you.
And I'm damn glad they did push me. As it turned out, I ended up having quite nasty spine problems in my 30s. There was a period of 3 years where I could scarcely get out of bed some days and couldn't put my own shoes on. But I was still able to work through a lot of that period, which I absolutely would not have been able to do if I'd become a welder.
I think you would dramatically overestimate the sophistication behind my desire to play with spaceships and ray guns.
From a comparative advantage perspective, high IQ people going into white collar professions is good for people with lower IQ.
Only if they're more productive in white collar professions. See how classists assumptions are so baked in you think they should go unstated? You're presuming a smart person is more productive in an office than working with their hands. You're also conflating IQ and education.
Same. I'm getting through my STEM degree 'cause I want to get payed more for what I can already do, and because people in trades often retire totally physically broken.
But, I'm also left as fuck.
Same here: I did a two-year masters solely because I wanted a much more interesting job and a little bit more money (it worked, hooray) and I don't think a huge deal of status came with it either. A more fun answer to the "so what do you do" question at barbecues, perhaps. This was, admittedly, STEM.
Another reason, apart from status, power, money, interestingness of the work and not wanting to breathe in zinc vapour all day: I'd rather work with folks with higher level degrees than with welders. (no offense to welders)
Anecdotally, that seems true of my own social circle as well. The stereotype about people who flip back and forth between grad school and being cashiers or baristas is accurate to my experience.
I'm not exactly sure what being a welder entails, or how one gets into it, but my vague impression includes standing in a garage a lot wearing protective equipment, and possibly moving heavy objects around.
I do think there's something to the idea of conservatives wanting to be able to make money sooner, so that they can have more kids, possibly at the expense of quality of life later on. So you have men who can apprentice early in construction sort of trades, stay at home moms who need a husband with steady income, and (later in life) those on disability from working in physically stressful jobs -- all very conservative categories. Which is a life path focused on getting started as an adult with a family much earlier in life, vs getting a grad degree and marrying at 30.
Having taken welding classes and learned to code, just as a way to spend your time, if you must work, coding all the way. Welding is also hazardous to your health.
Yeah, I feel like the undercurrent within my childhood circle of blue-collar overachievers was something like “We must excel at all things, else we be menials!” My dad's aspirations for me were to get a job where I wouldn’t have to work out in the rain. It really could have gone either way; if I hadn’t been “a student” as my family called it, I’d be doing a blue-collar job, too. It was largely luck that made me a nerd, and going to grad school and into a white collar job was more or less deciding to stay on Standard Nerd Track.
"Indoor work with no heavy lifting", as Terry Pratchett put it.
mailman; best blue collar job in America. used to be better but rationalization has turned the letter carrier into more of a machine. fresh air, sunshine, a walk in the park, depending on neighborhood. "Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds." the snow, the rain, nice hot days, they were all great. opposite to welding or sitting all day coding as far as health outcomes. In my 100,000 rustbelt blue collar town it had status because it was relatively high paying and secure, and the moderate physicality signaled health. also competitive to get in. mostly memory and attention to detail testing. you had to score pretty high to have a chance. In Cleveland one year 10,000 people showed up to take the test.
Interested in hearing peoples first hand experience with brand Adderall vs generic. I had been under the assumption that generic drugs are always identical to name brand. I started on name brand Adderall a couple months ago and then recently switched to a generic version. I felt like the name brand was less effective, so I did some googling and there are other people who feel the same way, citing things like different filler ingredients:
https://www.reddit.com/r/ADHD/comments/2nyshz/for_the_love_of_barkley_please_explain_this/
I've gone back to name brand and the effectiveness has returned. There should be no difference in the active ingredient so I am a little confused. I am open to it being a placebo effect, but I noticed the decrease in effectiveness while I still believed them to be equivalent, so that makes me a little skeptical.
Scott had a long article about adderall on his practice’s site - it notes different experiences with amphetamine brands are often results of the difference in absorption rate into the blood form the gut, which is mediated by the packaging and salts and a number of other factors. That could’ve contributing. Note the variety of differences in the section about different brands and formulations. There’s also isomer differences which might be relevant but idk. I’d personally recommend against stimulants in any cases, but there’s some info.
https://lorienpsych.com/2020/10/30/adderall/
Have you ever tried a double-blind test? Like, Buy some adderall and some generic; have somebody randomly assign them to days; take them without looking so you don't know which you have taken; try to tell afterwards which one you took today?
Can't speak to adderall, but I remember at one point definitely getting the impression that one type of Concerta was significantly less effective than another. The pills looked different, and I forget if I just had to live with it or switched to another pharmacy that could get the one that worked. Don't remember more details because this was quite a few years ago.
My wife and present house guest both have ADHD, and from what they've told me, it's not some general difference between name brand and generic drugs. It's specifically that Adderall made by Aurobindo is trash, for some reason or other having to do with the fillers.
Very much this. Aurobindo has been cited multiple times by the FDA for failing to meet manufacturing standards. Also, generics do not have to be "exactly identical" to name brands, they just have to have ±20% of the active ingredient in the blood stream, with a 90% confidence interval:
"To demonstrate BE [bioequivalence], the statistical analysis must show that the ratios (generic to RLD) of these parameters remain strictly within a 90% confidence interval of 0.80 to 1.25"
"I felt like the name brand was less effective"
"I've gone back to name brand and the effectiveness has returned."
Sorry did you mean to say that the name brand was more or less effective than the generic?
Sorry, it should have said the name brand was more effective
I have heard that also. Somebody I know who had initially taken Adderall, & then found the generic less effective, spoke with a pharmacist who told him that the Teva generic is made by the same company that makes Adderall and is essentially the same stuff. Person tried Teva generic and believed that it did indeed feel just like Adderall. He also commented that it tasted just like Adderall, whereas the other generics did not (he breaks his tablets in half, and sometimes just bites them in half, so is familiar with the taste).
As a soon-to-be parent, I’d like to ask for recommendations for parenting books. However, I’m looking for a specific sort of parenting books - which I call Outlier Parenting or Extreme Parenting.
Let me try to map out what I mean by Outlier Parenting.
I would define it as: Parenting that seeks right tail results via right tail methodology.
Outlier Parenting is not data-driven. This is because outlier parenting is rare enough not to have enough data points for analysis. Whereas many parenting books speak to the mainstream parent and focus on the normal distribution of child outcomes, outlier parents - both in their behavior and in their goals - are seeking to be on the far end of the right tail of the distribution.
As such, I would expect that each set of Outlier parents have their own specific ingredients and methods for parenting.
Example: Scott Alexander had written about and reviewed Polgar’s “Raise a Genius!” I found a lot of value in the book. But I want more… much more.
Another example: First 30 minutes of Captain Fantastic. Ok, fine, this isn’t a book, it’s a movie. And it’s not even a documentary, it’s fiction. But that’s an example of the type of Outlier Parenting that I would love to read about.
I’m not looking for books that tell us how mainstream education is bad. At this point, the existing education system has become a straw man for people like me.
I’m also not looking for books on homeschooling. Whereas “traditional schooling” has become so rigid that we know exactly what to expect, “homeschooling” as a term is so undefined that it can mean 1 million different things in 1 million different homes.
I’m also not looking for books on unschooling, unless unschooling means lots of goals, tons of work by both parents and children, and at least some structure.
I’m also not looking to engage in the nature versus nurture debate. Clearly (I think), my target for parenting books will focus more on the nurture side.
I’d like to read books by people who set out to raise superhumans. I’d like to know the details of their methodology, experience, and the results and lessons.
If such books don’t exist, I would appreciate any other leads (blogs, diaries…) Thanks!
I would read Alison Gopnik. But she is going to tell you, (as is just about everyone else who has had success working with kids), that trying to build your kid into something specific is a heart-wrenching journey.
I wonder if replacing "specific" with "objectively awesome" would still allow your to draw the same conclusion.
It depends on what you mean by "right tail results." To me "right tail" denotes something very improbable. Is this raising kids who will thrive in an improbable future? Then they should be adaptable, have lots of grit, take initiative etc. Or do you have a specific right tail risk in mind? In either case, I think child psychology books would be a good basis because you will want to consider things like the effect of peer group vs family group, or what periods in a child's life are most sensitive for the development of language etc. I am curious what you would consider extreme parenting. My kids have spent their entire life so far in African countries, and I was off-grid, 45 minutes from the nearest phone when I was a kid, and I would not consider either of these situations as extreme, because many people grow up in this way. The book "Child of the Jungle" is about one form of extreme parenting and the difficulties the author faced when reintegrating into modern society.
Thank you for the book recommendation. This does indeed seem like an example of extreme parenting.
When I refer to "right tail", I'm not speaking in terms of risk, but in terms of a general normal distribution, i.e. the right tail of aptitudes, outcomes, time investment, happiness, productivity, etc. For example, let's say a set of parents conclude that the +- 2sd range for daily "high-engagement learning" is .5-4 hours, and decide to invest x time and y resources to provide their child with 10 hours per day of "high-engagement learning". You get more of these stories about golf and tennis, which are interesting... but I'd like to find out what else has been tried, how it was tried, and what were the outcomes.
Your comment that many children are raised off-grid is accurate, in that there are 10s of millions of children that may fall in that category. The difference with a "Western" parent choosing this path for their child usually will require quite a bit of intentionality. This intentionality may be selfish or out of necessity (less interesting for my purposes), or with specific goals for the child in mind. I'm seeking out stories (methods, outcomes, successes, failures) of the latter.
Parents have a natural drive to invest in their offspring and it seems that any parenting trick that leads to greater outcomes for the children will be copied by many.
Trick, perhaps. Thousands hours of additional work, maybe not. At least not in the social circles I travel in.
Not sure how far out on the right-tail my kid will ultimately get, but he’s doing quite well so far. We haven’t been obsessive about it, but we’ve been decisively in favor of him developing any latent superpowers he might possess. You’ll find that a lot of parenting is improvising. There really aren’t any grownups. Everyone’s kinda making it up as they go along.
For what it’s worth, my experience consists of raising one boy who’s currently 16. His whole life, we’ve regularly gotten effusive compliments from other grownups (teachers, parents of his friends) on what a great kid he is. Most of that is just him being his excellent self, but here’s a few suggestions on things that seemed to work well from a parenting perspective.
We only had “the one big rule”: Don’t Get Hurt. Too many people have too many rules for their kids. (He’s always had empathy, so we never had to state the complementary rule “Don’t Hurt Other People”. Suggesting “that might hurt so-and-so’s feelings” was enough.)
Get in the habit of carrying a handkerchief: it’s super-useful when they’re really little.
Our kid talked early and often, but he could understand speech and use a few basic sign-language moves for a couple months before he started speaking; the signs for “more” and “all done” are especially useful. The sooner they can consciously communicate, the better.
Good manners will take you a long way, and they don’t cost much; bad manners can get real expensive real quick. If you practice good manners at home (just simple old-fashioned stuff like saying please and thank you to your significant other for every little thing, for example) your kid will soak that up, imitate it without even thinking about it, and get more cooperation and extra respect from most other people with very little conscious effort for the rest of his or her life.
Only say no when you really mean it, and always explain why. None of this “because I said so” bullshit. If you don’t mean “absolutely not, that’s a flipping horrible idea and here’s why” then don’t say no. Say “not today” or “maybe, if we have time” or “I’d rather you didn’t, because,” etc.
Don’t bullshit your kids - I mean, believing in Santa is kind of a fun game (and the eventual disillusionment gets them used to the idea that mythological-sounding stories probably aren’t really true) but, in general, give them as much of an honest answer as you think they can handle, for any question they ask. If it’s something you don’t want to explain, you can explain that (i.e. “oh, that’s a gross joke about sex - I’d rather not go into the details, ok?”)
Praise and thanks are best when immediate and specific (“thanks for helping clean up for the party - yeah, stuffing most of your toys into the closet totally works. That was a good call.”)
Correction should be mild and certain, and involve a dialogue, not a lecture: “we left the party and you’re going home to have a time-out because you bit that kid. Oh, you bit him because he was holding you down while that other kid punched you? Ok, that’s a pretty good self-defense move; I can see why you did that. No, we’re not going back to the party now. I mean, a party where you get into a situation like that, that’s not a good party. Well, I’m sorry you didn’t get to have cake, but there will be other birthday parties.” Time-outs were 1 minute for every year of age: hardly ever had to use them, never after he turned 6.
When possible, let them have a turn calling the shots. “Do you want this for lunch, or that?” “What do you think we should draw?” Kids have so little control over their own lives, and they need all the practice they can get making decisions. The sooner and more often you can allow them to exercise some control, the better. “Do you want to go on this ride, or that one? Or maybe that other one first?” (Pro tip: it’s also a great sneaky way to steer them away from stuff, by not listing options you don’t want them to choose while giving them something else to think about and a gratifying feeling of agency.)
Prioritize giving them your attention and being patient. They will want to tell you about all sorts of things you may have little interest in, and it is a pain in the neck when you have to get up in the middle of the night and change the sheets because they wet the bed. Patience and empathy are essential virtues here.
You will have occasion to apologize to your kid: I recommend short, simple, direct, slightly on the formal side but sincere. “I’m sorry mommy and I were squabbling; I’m sure that was no fun for you. People just step on each other’s toes sometimes - I think we’re all settled down now. But I’m sorry you had to listen to us yelling.” (Still happily married, btw!)
Hope this helps - everyone’s got their own row to hoe, YMMV, etc.
"Bringing up Bebe" is my favorite parenting book, but I wouldn't recommend reading books that are directed at parents of kids much older than your own. They will give you a long list of things to worry about and dread, but no individual kids hit all the lowlights of parenting. Better to wait until you learn what particular problems and opportunities you will actually face. Remember, no plan survives contact with the enemy, and your parenting will likely be dictating in large part by your kid.
You're not "trying to get into the nature vs. nurture debate", right, but it's not a debate anymore (scientifically) and hasn't been for 20-30 years: Parenting doesn't have much influence on the basic characteristics of children (personality, intelligence, etc.). So if as a parent you want your child to excel in a specific area, you can "only" increase the amount of time he/she spends practicing that thing, which means doing less of other things, This also implies deciding for the child what area he/she is supposed to excel in. And since education doesn't change intrinsic abilities, if your child isn't quite good at the area you've selected for them, they won't in fact excel.
Anecdotally, I watched Captain Fantastic with my 16 and 12 year olds a few months ago and they thought the kids' education was horrible.
Thank you, Emma, for your recommendation of books on Outlier Parenting. In response to your comment, I can only surmise that your children were genetically predisposed to disliking Captain Fantastic's alternative education methods.
I'm sorry I wasn't helpful. I disagree with your goal, not because I disagree with your goal (every parent has different values about education, of course), but because I feel that your goal is based on a false premise, that it is possible for parents to make geniuses out of their children. I think that it is not possible, because a huge amount of studies say that parenting has little influence on the things that are, in my opinion, associated with geniuses.
If you really are interested in "lots of goals, tons of work by both parents and children, and at least some structure" you could try "Battle hymn of the Tiger mother" by A. Chua, who explained how she did just that with her daughters (and with, in my opinion, previsible results).
Emma, Thank you for your suggestion and sorry for the snide reply.
The only personal goal that I had stated in my original post was seeking a specific type of literature. Beyond that - my two examples were of a parent who focused on chess and a parent who focused on liberal arts and wilderness survival. How somebody might guess my specific conclusions and child rearing takeaways from those two works is beyond me.
I'd be lying if I said that I wasn't disappointed by the fact that most of the replies to my query were of the discouraging "your children will hate you" sort. I didn't expect such a closed and anti-curious response from this community.
I will also say that despite its title (Raise a Genius) and everything presumed about my goals, any parent who reads the Polgar book and finds nothing of value in it perhaps is not best qualified to comment on parenting threads. And that includes even the most ardent genetic determinist.
Emily Oster’s books
A lot of superhuman people have terrible parents.
Have a look at William James Sidis.
Lord Chesterfield’s Letters
Not being interested in the nature vs nurture debate means that you might potentially waste a lot of effort trying to raise a genius child and create long-lasting resentment towards you in your kid.
My opinion also!
Free Range Kids perhaps. I haven't read it.
Will the unfortunate child have any say on the experiment to be performed on it?
This isn't really about outlier parenting in the sense that it's about trying to produce geniuses, but its advice seems to me really essential for anyone with that particular aim: Alfie Kohn's "Punished by Rewards," or perhaps some shorter works by him would work too. The argument is that rewarding or even just praising children for doing what you want often backfires. Not a parent, but I found it convincing.
Does Kohn talk about what to do when the child doesn't do what you want? I grew up in an environment with blame but no praise, and it wasn't great.
Oh, that's, uh, not the recommendation. Punishment and blame are just as bad as reward and praise.
The stuff about rewards is mainly about things where you want the child to *want* the right things, or *like* the right things: so rewarding or praising children for studying or learning or being kind to others is a bad idea, because you need them to end up wanting to do those things for their own sake, so you don't want to make their motivation to do those things end to get tied to rewards or praise. He cites a bunch of research showing that both children and adults perceive activities as less intrinsically desirable after getting rewarded for doing them--it's like they reason that, if they have to reward me for doing this, it must be because it's not actually desirable to do.
I don't remember if he had detailed suggestions about what to do when the kid does bad things, but I'd guess his suggestions there would end up closer to the liberal mainstream (e.g., try talking to them rather than a punitive approach, whenever possible).
Question, how many soon to be parents are on this blog? Because I seem to see this or adjacent questions quite a bit here, so that makes me curious. (And pls keep the recommendations coming, although I'm personally some years away before it'll be relevant for me)
I've got a two year old and another on the way.
I’m on the other side with a 3 month old at this point. My life is definitely scrambled but there’s fun moments in between a major Increased level of effort in home life.
I don't know you and this may go without saying but I hope you've considered how to handle a child who ends up unable or unwilling to deliver the kind of outlier results you're hoping for. It kind of sounds like you're planning for your kid to be a genius before they're even born and that seems like it risks a lot of stress and conflict if things don't go as well as you're hoping.
If I promise to love and not abuse my child, is there a chance you might give me a book recommendation on Outlier Parenting?
Try Bruno Bettelheim's "The Good Enough Parent." It's not a how-to about parenting but more like a how-to raise a child into an excellent adult, in all the meanings of that word.
The child prodigies on YouTube are usually excelling in something their parents already do well/love/devote significant time to (the physical skill ones at least). The parental involvement, dedication and ability to impart technique elevates the situation above “talented kids taking lessons.” My impression is it takes more than one generation to pull that off. And you have to put the kid in it very early to make them feel like a fish in water in those skills. So whatever it is that you do, live and breathe, those are probably the candidate fields for superhuman status for your child. If I’m thinking of the right movie, the superhero grew up living it with every moment an opportunity for additional skill, so she became next-level. The child’s natural aptitudes can influence it too. But learning self-care outside the social norm is also key, because kids in the US are bombarded with messages about the importance of goofing off, the emotional necessity of wasting time. So to have them learn to identify and handle emotions, be rigorously and positively honest and release stress, will separate them from a super kid who will burn out at 19 from a super kid who can transition to adult success. That’s a hurdle many people can’t clear.
One of my kids is startlingly good at something which he recently connected with, the other is still looking, I am probably an example of burnout, so this is is from the perspective of “what not to do” as well as “what to do.” Workaholism not at all identical to what you are talking about.
Venus and Serena Williams come to mind, Tiger Woods to an extent. Their backstories include some of the factors you mention. Also bassist Victor Wooten. If I think of any others I’ll post.
"The child prodigies on YouTube are usually excelling in something their parents already do well/love/devote significant time to (the physical skill ones at least)."
I think that this is a very important point! I was impressed by the oppressively strict parenting style of Amy Chua, which she described in her bestseller of about ten years ago, The Battle Hymn of the Tiger Mother. To keep things short: A Chua, who is a law professor , forced her daughters to spend all their free time practicing music, so that they could excel. Right now, her two daughters work in law.
That’s fantastic and also hilarious, I will have to look for Chua’s book.
Something that is consistent with your examples is Polgar's clear focus on a single discipline in his approach. I would be interested if there are documented multi-disciplinary approaches that show promise.
Polgar's clear focus on a single disciplines still had a space for some extra things. His proposed ideal day: 4 hour chess (or whatever is your specialization), 1 hour languages, 1 hour computer science, 1 hour humanities, 1 hour physical education.
I imagine you could tweak it to include two or more specializations. The easiest way would be to switch the specialization every two or three months; rotate between two or three specializations.
I also wonder what happens if one chooses to "maximize" a soft skill. I think that soft skills are by default interdisciplinary.
Also there’s a type of radical acceptance which I think is different from the “I make my kid into a genius.” More the “down to the last detail, what environment can I create that will have as an end result the child having necessary skills to achieve?” Of course the kids crack up later if you live your dreams through them, if they become a robot they crack up when they meet adult society, if their actual skills and needs are not honored they are always at cross-currents with themselves in a detrimental way. If it’s power-over, then in order to become fully adult they have to reject everything you taught, which is not the goal. But that being said I think recognizing and leveraging those things is 100% possible. There are very highly competent people with a wide range of initial talent/interest, however that’s measured.
Now I’m curious too. If I find anything I’ll post although it won’t be for a few hours.
Question: What is the difference between life on Earth and life on a large, luxurious, self-sustaining spacecraft lost in space?
In the movie/poem Aniara, a large (4,750 meters long and 891 meters wide) and luxurious passenger spacecraft (named Aniara) leaving Earth for Mars is hit by some space debris. This results in the spacecraft losing control over its trajectory and they are now unable to return to mars nor earth. The spacecraft is self-sustainable, but after a few months the passengers are reduced to eating algae. The movie explores how meaningless life seems to be for everyone on-board.
I suspect that most people, like myself, believe that life on Earth is meaningful while life on Aniara is not. I find this to be strange since ultimately both Earth and Aniara is just a large thing floating in space, so somehow life should be equally (not) meaningful. So, what are the key differences between Aniara and Earth? Or in another word, what features should Aniara have to make life feel meaningful?
Another interesting question is how the psychological aspects of life on Aniara compares with early humans' life (under any reasonable interpretation of early humans). Is the psychological struggle of Aniara similar to the struggle of early humans? Perhaps this is why religion is so universal?
Many people derive value and meaning from doing useful work, where "useful" means making something better than it was before. Some people do this through their job, others just use the job to pay the bills while they raise children who they hope will exceed their parents. And some will substitute "avert catastrophe" for "make something better". But if you're on a spaceship which is A: near capacity and B: safe and secure and C: definitively not going anywhere, then you've got precious little avenue for any of those. Which is likely to lead to ennui, despair, and maybe nihilistic religions.
There are also people who are content to coast through a comfortable life, accomplishing little, but those are the people least likely to find themselves on a spaceship. A luxury liner, maybe, but even there you're probably selecting for people who want either the experience of seeing strange new worlds. or to come home with the status of having made the trip, and you're losing both of those as well.
Also, with apologies to Harry Martinson, "Aniara" is no longer a ship, it's a fleet - the last survivors of Sjandra Kei, pursuing the Blight to mutual annihilation by the Countermeasure and the Godshattered remains of Pham Nuwen. If your world is beyond hope and what remains of your life is in the depths of space, that's a much better path to follow.
The big problem with Aniara is that it's in a locked state in which you can't improve things due to the lack of raw materials.
(If there were enough materials on-board to repair the ship to a not-eating-algae, bound-for-a-planet state, I wouldn't consider it meaningless.)
There are many lost space ship stories in sci fi. "Orphans of the Sky" by Heinlein comes to mind.
Thanks for the recommendation! I'm also currently reading "The Freeze-Frame Revolution" which has a similar theme.
Hmm, not really a recommendation. I don't much like it, but there is a genre of 'lost in space' stories. "Rendezvous with Rama" by A. Clark... there must be hundreds of others.
What's different between earth and LLS-SSLIP
-Size
-Variety
-Amount of mystery, sense of things to discover: Lots we don't know about Earth & its inhabitants and history.
-Otherness: Earth was not planned and built by us, LLSetc. was
-Temporal depth: Earth is vastly older
-Human temporal depth: Vast number of members of our species have lived on Earth before us. Knowledge of their past presence enriches our experience.
-Ties to our human past: Bones of our lost loved ones are on Earth, not on LLSetc.
-Number of possible present and future human ties: Way more possibilities on earth.
-Cultural depth: You can take books and music and art onto the LLSetc., but they are about life on Earth, not life aboard the LLSetc. LLSetc. has little cultural depth.
Thanks for the answers.
- Size and variety: What if Aniara is the size of Texas?
- Amount of mystery, sense of things to discover/otherness: I think you're onto something here but mystery is not quite the right word. For instance if Aniara grew every year in some random way (e.g. an AI generating a lot of new random rooms every year) then we will have plenty of mystery, but I think it's still meaningless.
- All points related to the past: Yes this is an important point, and is related to my question regarding early humans. The issue is that some of the earliest homo sapiens (is this even well defined?) cannot rely on this. Do they experience this meaninglessness as well? Or perhaps they're too occupied with the terror of death?
Now that I think about it is this more about sense of security? For instance if I was born in Aniara as the 100th generation then I'll probably be convinced that life is going to run for 100 more generations, and somehow this generates meaning. So history is only useful to convince ourselves that there's a future.
Also for what it's worth there's some amount of cultural depth (new arts, religions, etc) in Aniara but clearly not enough.
This point about the future generating meaning is an interesting one that was first clearly pointed out to me by an article by Samuel Scheffler in the New York Times, discussing the movie Children of Men. I think he explores this theme further in his book from around that time, but I haven't read it yet (and unfortunately, I never actually took the time to get to know him, even though he was a professor in my graduate program - I thought of him as working on very different things from me at that time).
https://opinionator.blogs.nytimes.com/2013/09/21/the-importance-of-the-afterlife-seriously/
Thanks for the book recommendation! It seems extremely relevant and I will definitely read this. (The book is Death and the Afterlife for those too lazy to find it)
So the freakin AI adds new random rooms every year. That's not plenty of mystery, that's Mall of America.
Do you have anything specific in mind that will constitute as plenty of mystery? Maybe if it somehow generates forests/mountains/caves? I guess at that point that'll be so different from our life that it's hard to imagine how that'll feel.
Well -- you could have AI perfectly reproduce parts of Earth, landscape complete with fossils, vegetation, etc. But that idea's cheating I think.
You could have the AI carry out some processes that are approximations of some on Earth. For example, whatever it is that waves do that scupt natural caves, form the beach into ripples, etc. So then instead of rooms you'd have areas of the ship that were beach-like, rockface-like, etc. Areas would be irregular but not random, like so much of earth is -- mt ranges, waves, coastlines, tree branches. Spaces where you can't predict, just from the features of one area, what's happening at the adjacent spot -- and yet the human eye senses the order there, the deep nested regularities. You know, fractals etc.
Or you could have LOTS of mystery and richness if the AI set some organic process going. -- some living thing that reproduces and evolves and uses some of the ship for food and then creates new parts of the ship as it goes about building nests, discarding waste etc. Problem is, the organism is going to view the human occupants as just raw materials for its life. So now instead of Mall of America you've got life in the movie Alien, which sucks marginally more than M of A.
Variety. There is a huge amount of stuff on earth; way more than you could see in a life time. Even if you don't see it, you know you could.
In a space ship, you could walk every corridor and talk to every person.
So you're saying that a sufficiently large ship, say the size of Texas, with a decent amount of variety will be meaningful?
I haven't seen the movie, but from the summary on Wikipedia it seems like life on the Aniara is neither "luxurious" nor "self-sustaining." First the ship loses propulsion, then the VR system they use to make life on the ship tolerable breaks down, then they're reduced to living on algae, then the power goes out, and then everyone dies. Life seems meaningless because the ship is doomed and it's only a question of how quickly they'll die, and how much they can lie to themselves about it. That seems more salient than simply "being on a spaceship" - if the ship wasn't breaking down, it would just be one of those neat sci-fi stories about life on a generation ship.
Like, I don't think people would say the Quarians from Mass Effect live a life devoid of meaning, even though they not only live on spaceships but spend most of their life inside environment suits because of the danger of disease. Their society is stable, self-sustaining across many ships, they've developed a culture around their ships and suits, and they even have the hope (slim, until the player intervenes) that they might once again see their homeworld.
I would say the Aniara seems meaningless because it's not only doomed, but doomed on a human time scale rather than in the sense of "in a million years the Sun will engulf the Earth." And indeed, what Aniara reminds me of on Earth would be the extreme climate doomers - the people who say "I don't want to have children because they'll be born into a world that we've destroyed."
It's one thing to know that, in some distant, abstract sense, everything you do is impermanent. It's another thing to know that the inevitable doom could happen in your own lifetime.
Yes, I intentionally gave a more generous description of Aniara to make the question more compelling. In theory Aniara is really self-sustaining if it's (manually) maintained well enough, which didn't happen in the long run because, among other things, people start killing themselves a few years into the trip. So in the movie the societal collapse induced the infrastructure collapse, not the other way around.
I googled a bit and it seems like the number of humans necessary for a sustainable colony is surprisingly small (all of the numbers I've seen are comfortably below 1000), so in theory they're not doomed on a human time scale. But I agree that most of the passengers feel doomed in the human time scale and this is a huge part of the equation.
Your point regarding extreme climate doomer is very good. I suspect most people with extremely pessimistic view of their life or the world in general doesn't see meaningful difference between Earth and Aniara.
This story sounds overly pessimistic. In general, people facing hardship tend to be remarkably good at persevering and ingenious in coming up with reasons to do so. Countless examples demonstrate this.
I suspect the feeling of meaninglessness comes from having specific goals that you care a lot about but don't think you can ever achieve. From your description, I'd guess goals are probably
1) Make significant long-term improvements to your society (the Aniara)
2) Reunite with the rest of the human species
My impression from the summary was that these are the main problems. (1) - life is now, for our people, about as good as it will ever be for our people, with no hope of improvement. (2) - most people are not in our group, and our group will never know what any of them are doing or thinking or caring about.
I see. I guess 1) is not possible because there are no resources to do, say, non-theoretical scientific research? I think for a very long time essentially all humans don't make significant long-term improvements to their society. It's still true now but it's especially true thousands of years ago, I think. Is the goal/hope of doing so sufficient to generate meaning? For what it's worth I also think that the typical human's goal is way more local and modest: Do good things for me and the people I like. Perhaps your point is not that I cannot achieve goal 1), but that no person can achieve goal 1) and hence society will be "bad" forever? (e.g. It's not that I can't make a new iPhone, but it's the fact that no one can make a new iPhone hence society is bad)
I agree with point 2) to an extent. Again I think most people are mostly interested with the more local things i.e. reuniting with family and friends.
Thanks for the answer! This has been bothering me for many weeks.
I think a lot of people hope that their children will have a better life than themselves. I guess that doesn't necessarily imply that society in general is improving, though it requires that there are opportunities for at least local improvement.
Of course, one also should not discount the extent to which the storyteller is framing the story to evoke a particular feeling that might or might not be how people would actually feel about it.
An economics question –
Is there statistical research showing that undistributed earnings later return to shareholders?
I don't think your question makes a lot of sense. Undistributed earnings can be distributed to shareholders or reinvested in the business. Sometimes it is invested unsuccessfully (Time Warner purchasing AOL) and sometimes it's successful (Amazon). When it's successful, it just turns into more undisturbed earnings.
I think the boring answer is that US equity investors have historically earned a premium of around ~6.5% over Treasuries. Looking at research on the equity risk premium might help answer your question.
Thinking about your question more, I think you're asking if an accounting principle (retained earnings) is correlated with future equity returns. There's a huge world of research there, starting with Fama and French's factor model
I’m mostly interested in the observed correlation strength between undistributed earnings and later dividends/capital gains for the recent decades. Is there a reference you can give me for the US companies? Thanks.
Anyone here have experience/knowledge of NMN? I'd in particular be interested in:
1) An ELI5 explanation of what it does and how that is supposed to slow down aging. I've tried to look around a bit, but a lot of what I find is way, way over my head.
2) Relative estimates for how likely it is to do anything helpful/harmful.
3) A good (preferably high status) place to point family members towards to convince them you're not just taking creepy drugs because the internet told you to.
ELI5 - we know that NAD+, a critically important substance for every living cell, declines with age. The idea is that by providing NMN, which is a precursor (raw material) to NAD+, you'll restore the NAD+ levels to those of a younger person. This is promising not just because a critically low level of NAD+ will kill you (the levels are not _that_ low), but because the level of NAD+ regulates the activity of many enzymes. Thus, providing a level closer to a young human may, in fact, make you function more like a young human.
The main failure mode here is, breaking the thermometer to solve the room getting hot. The entire field of aging research knows a lot of correlations, but not many causal chains. There are plausible mechanisms how it can mitigate the impact of some age-related diseases. It's also possible that the entire thing is a red herring and the low NAD+ levels are incidental.
It looks almost certainly safe, and either placebo or somewhat beneficial. I wouldn't spend my life savings on it, but would consider getting it at non-ripoff prices.
Unfortunately, all actual high status places I've found are PubMed, which your family members probably will find baffling. There was at least one clinical trial: https://clinicaltrials.gov/ct2/show/NCT02678611
https://www.elysiumhealth.com/en-us/basis/human-clinical-trial-results
I know of at least one study that seems to show a causal effect, where boosting NAD using niacin restores muscle function in Mitochondrial Myopathy:
https://www.cell.com/cell-metabolism/pdfExtended/S1550-4131(20)30190-X
I would very much like to request Scott - or someone else who feels they could tackle it - to do a post about the history of the teaching of science in schools, particularly with reference to the vexed question of Human Evolution.
I've had some small exchange of views on this in another comment thread, but I don't know enough. All I really know is (1) the Scopes Trial happened in 1925, and there was such a lack of people lining up to hire lawyers on the basis that "I was fired simply for teaching the science!" that they had to advertise for anyone wanting to take such a case (2) "Survivals and New Arrivals" by Hilaire Belloc in 1929 twitting the Protestants over such controversies (though, being Belloc, he backs Lamarck over Darwin) in criticism of Biblical Literalism:
"The Literalist believed that Jonah was swallowed by a right Greenland whale, and that our first parents lived a precisely calculable number of years ago, and in Mesopotamia. He believed that Noah collected in the ark all the very numerous divisions of the beetle tribe. He believed, because the Hebrew word JOM was printed in his Koran, "day," that therefore the phases of creation were exactly six in number and each of exactly twenty-four hours. He believed that man began as a bit of mud, handled, fashioned with fingers and then blown upon.
These beliefs were not adventitious to his religion, they were his religion; and when they became untenable (principally through the advance of geology) his religion disappeared.
It has receded with startling rapidity. Nations of the Catholic culture could never understand how such a religion came to be held. It was a bewilderment to them. When the immensely ancient doctrine of growth (or evolution) and the connection of living organisms with past forms was newly emphasized by Buffon and Lamarck, opinion in France was not disturbed; and it was hopelessly puzzling to men of Catholic tradition to find a Catholic priest's original discovery of man's antiquity (at Torquay, in the cave called "Kent's Hole") severely censured by the Protestant world. Still more were they puzzled by the fierce battle which raged against the further development of Buffon and Lamarck s main thesis under the hands of careful and patient observers such as Darwin and Wallace.
So violent was the quarrel that the main point was missed. Evolution in general—mere growth—became the Accursed Thing. The only essential point, its causes, the underlying truth of Lamarck's theory, and the falsity of Darwin's and Wallace's, were not considered. What had to be defended blindly was the bald truth of certain printed English sentences dating from 1610."
(3) The big debate/discussion between Darwin's Bulldog and Soapy Sam took place in 1860: https://en.wikipedia.org/wiki/1860_Oxford_evolution_debate
So what happened between 1860 and 1925? What was the state of acceptance of the Theory of Evolution and how was it adopted into school curricula? Did the Northern United States teach it where the Southern states did not, or was it just that there wasn't a big splashy trial in the North? I'm aware of how the science is new and exciting but it hasn't made it into the school textbooks yet, so I'd like someone smarter and better-informed to trace the path of development from "Darwin says I'm a monkey's uncle????" to "sure, of course all our school textbooks contain this!"
Did it get taught earlier in Europe? Was America an outlier? What was the state of play in 1925? Because I have no objection to being called a troglodyte who wants to drag everything back to the Middle Ages, but I'd like to see some *facts* on this rather than the pop culture version of "Ordinary high school teacher doing his job was dragged into court by the ignorant Bible-bashers".
(Nobody has called me a troglodyte, just to make this clear! The other party was exasperated but polite!)
This guy: https://drakelawreview.org/volume-49-no-1-2000/ attributes some of it to the Protestant reformers casting about for another victory after succeeding with Prohibition. As to why the Catholic tradition is less literal than the Protestant, I think Catholic intellectuals are just smarter than bog-standard Protestant preachers, and I speak as a Protestant. I mean, Augustine said "C'mon, people, you don't have to take Genesis *literally*. Sheesh." (That may be a paraphrase.) One commonly-held view about why the U.S. Supreme Court has so many Catholic justices is that they have the conservative views about abortion that the Evangelicals want, but they're well-educated and thoughtful, so safe-ish to have on the Court.
What I really want to dig into and get at is "what was the state of teaching biology, including evolution, in the 20s?"
Because what it seems like - and I could be very wrong, which is why I want someone more scholarly to dig into it - is that local politician gets up on his hind legs and has act passed, everyone says "sure, right, whatever" and proceeds to ignore it - and this was in Tennessee, where the famous trial took place, and which then gained the reputation of redneck ignorance and "Science versus Religion".
But *was* Evolution by Natural Selection 'settled science' in 1920s? What was being taught elsewhere? Were there similar acts in other states, we just never heard of the one in (pulling this out of the air) Vermont because nobody had a show trial there? Darwin's particular theory suffered an eclipse up until the 20s so that a state wasn't teaching *Darwinian* evolution does not mean it wasn't teaching evolution *at all*.
Basically I want to know what I'm fighting about when I'm fighting about the glib assertions that "the Republican Party has to appeal to the religious and the religious are all anti-science, that's why Republican politicians are anti-science".
The Catholic tradition is less literal than the Protestant because *the Catholics compiled the Bible*. In Catholic tradition it has always been a book about divinity, but written by fallible humans, because *they wrote it*, or at least chose what sources out of hundreds or thousands of possibilities to include. Because the people and councils and committees that did this did not claim divine inspiration, no religion that maintains an unbroken line of tradition from those people can claim they were divinely inspired.
It does help that noting that the Bible is just a book, about god but by humans and for humans, leaves power and doctrinal choices in the hands of the hierarchy of clergy, instead of surrendering it to every idiot who can read.
Ummm, no. The Catholics did not compile *The Bible*, the Catholics compiled their version of Bible. Eastern Rite churches compiled their own Bibles. Different traditions eschewed different books. And having split off from the Catholic church, Protestants eschewed some books that the Catholics weren't offended by. I think the Syriac Orthodox Church has the most books of any Christian tradition.
I think just about every Christian tradition eschews Enoch, Jubilees, The Prayer of Solomon, The Ascension of Isaiah (which may have been written after the Council of Carthage), and Baruch. There are a bunch of other books I'm forgetting.
The Ascension of Isaiah and Enoch are fun reads, though! Don't avoid them just because you're Catholic or Protestant.
My old roommate literally wrote her undergrad thesis on this. I’ll ask her if it’s available somewhere!
That would be great, thanks! I'd like to get some kind of overview of "okay, so between 1870 and 1920, when did Evolution The Theory start getting taught in schools and when did it move from 'here's an interesting notion" to "this is the settled science"? Particularly when did schools start using textbooks about "Okay, we've decided Darwin was right after all" because there does seem to have been a period between "fine, we accept evolution, but there are competing theories about how it works" to "fine, we accept Darwin is the winner".
It's easy to point and laugh at bigoted rednecks down South, but were schools up North any quicker off the mark?
One really important point about the period you're talking about is that the "evolution" people were debating about was not really Darwin's version of evolution, but Herbert Spencer's (https://en.wikipedia.org/wiki/Herbert_Spencer). Spencer was the one who really popularized Darwin's ideas ("survival of the fittest" is from him), and he definitely had more influence on the kind of science that would have made it to the local school level. Since Spencer's version of evolution also extended to the social and cultural spheres ("Social Darwinism"), that had a huge impact in how the theory was accepted/resisted.
I know nothing about the teaching, but on the "competing theories" bit, I know that in the 19th century Darwin and Mendel were thought of as incompatible, and by 1950 they were thought of as two essential pieces of the inseparably correct picture, and this "modern synthesis" came together some time in the middle (probably around the 1920s).
It's possible the wikipedia article on the Modern Synthesis explains more about the teaching side of this history, as well as the theoretical side within the field of biology:
https://en.wikipedia.org/wiki/Modern_synthesis_(20th_century)
A theory - Protestantism addressed this differently than Catholicism because the A'thoratah of the Church had a long tradition of shaping belief expressions IN ADDITION TO reliance on Scripture. When Luther's heirs threw out the Pope's authority, they were left with just what was in Scripture. (And they were very keen on being very careful with what was defined as Scripture.) Ain't no evilution in the Good Book - just like no catalogue of post-Apostille saints, etc, etc.
*American* Protestantism handled this differently than in Europe because in Europe (specifically the UK but also elsewhere) there was a strong link between the State and the organized Church. So the elite/educated opinion of national rulers could hold sway over the teachings of the local parishes. (Also, in France, they cut religion out of the state by the bloody roots, so the question didn't really come up.) Additionally, so long as America was majority Protestant, it was majority Protestant at the local school boards, which are (still) incredibly powerful in setting the educational agenda. And then - as now - the local school boards are run by who shows up. A few impassioned folks and the agenda isn't shifting for 20-30 years.
The main force opposed by those against teaching evolution was not science, but atheism, which is still (even now) not a great look on the local level. At least in Northern urban/educated areas, Existentialism and its kin were fairly popular, during and up through the CW. 'Eastern religions' were getting more play. And so, gradually, the resistance to ideas that were already generally known in folklore and animal breeding came to be widely accepted.
Okay, noodling around a bit online, one reason for Belloc backing Lamarck was because of his French ancestry and French biologists went for some form of Lamarckism
https://en.wikipedia.org/wiki/On_the_Origin_of_Species#Impact_outside_Great_Britain
"French-speaking naturalists in several countries showed appreciation of the much-modified French translation by Clémence Royer, but Darwin's ideas had little impact in France, where any scientists supporting evolutionary ideas opted for a form of Lamarckism"
Second, there was a period known as "the eclipse of Darwinism" where biologists broadly accepted evolution but considered Darwin had got the mechanism via natural selection wrong, and competing theories were in play. This lasted from 1880-1920 (very roughly), so it is not in fact very odd that American schools in 1925 might not be teaching Darwinism (as distinct from simple evolution).
https://en.wikipedia.org/wiki/The_eclipse_of_Darwinism
At what inflection point does intellectualism become a vice, reading a kind of gluttony for ideas?
A phrase like "the gluttony for ideas" this seems to code word for an anti-intellectualist agenda. Gluttony is a term with strong Christianist religious/political connotations, and it dates back all the way to Paul. Philippians 3:19 comes immediately to mind.
My answer is: There's no down-side learning new things. Anyone who claims there is pushing an overt or covert moral agenda.
No, there is a point in which learning new things becomes analysis paralysis. It might be fine if the good learners have no duties and no decisions to make, but we really do not seem to be living in times in which the studious can just kick back and indulge in knowledge acquisition.
Lol! You sound like my Puritan forefathers. But instead of labeling it the Devil you label analysis paralysis.
I don't have such a strong opinion about analysis paralysis, but it is obviously a detrimental thing. You think that in all contexts across all of history there is endless time for study? And sure, one can both act and study, but a "glutton of ideas" sounds to me someone who studies too much and acts too little, if at all.
Like I linked below, this image really does get at a real problem:
http://2.bp.blogspot.com/-p1QofuEKOK8/Vj-iCXl1EOI/AAAAAAAACQc/YXE9QVkBt2g/s1600/15k09%2B-%2Bquino.jpg
"Well, now that I know so much, what now?"
Screwtape Letters had something to say about this from the vice pov. Hard to summarize. The Space Trilogy did too from a more utilization pov. Easier to summarize a bit- intellectualism doesn't always lead to correct thought, but it does lead to confidence, which can be pretty dangerous when uncoupled from a strong moral development. Results in things like Buck v Bell, though not something the author was thinking of likely as a Brit and may have been published before Buck v Bell.
I read "correct thought" as being approved thought. And I read "strong moral development" to be a tool to promote ideological conformity. It's ironic that you bring up Buck v Bell, because the whole eugenics movement coded their agenda in terms of "moral improvement."
There's probably a difference between being hooked on wanting more facts, which is mostly what's being mentioned in comments vs. being hooked on what I'd call ideas. The latter can get you hooked on what's called insight porn.
Seems like the "Insight Porn" literature is just the same old anti-intellectualism warmed over using contemporary denigrative terms. Instead of gluttony, substitute porn. Insight = porn, and porn is bad for you. It's the same old moral agenda that's been pushed by Abrahamic religions since Adam and Eve ate from the tree of knowledge.
https://mindlevelup.wordpress.com/2016/10/28/insight-porn/
If so, I need a twelve-step program.
At the point where you develop an internal belief that "more information is what I need" and glut on information in lieu of processing ideas and implementing them into the business of living. This pesky belief can develop at a lot of stages.
In the software industry, we call this “analysis paralysis”.
Like this image:
http://2.bp.blogspot.com/-p1QofuEKOK8/Vj-iCXl1EOI/AAAAAAAACQc/YXE9QVkBt2g/s1600/15k09%2B-%2Bquino.jpg
"Well, now that I know so much, what now?"
This is a critical question for humanity, because I think we're going to hit several crossroads this century, and if we try to cling to business-as-usual (e.g. letting economic indicators hold the lion's share of decision making) it will end in disaster.
I think we need to take a look at the Founding Fathers and figure out how they managed to redefine everything like that.
I think one problem arises when you learn too many "facts" (or "trivia") ahead of building a foundation of knowledge, and think that abundance of facts makes you knowledgeable.
Example - knowing a lot of historical facts, but not having enough historical intuition to feel where a new fact fits into your model and smell out bullshit. At that point you just take everything you read at face value, and consider it as "more facts to the bank" rather than "possible update of my model".
A recent example is when I excitedly shared some piece of etymology I found (I love those) with my partner, who is a linguistics grad student. I think I'm generally good at picking apart what's folk etymology and what's real, but she has the foundation to say "huh, that's weird, it's not how these sound changes usually behave", after which we both dug deeper into it and of course she was right.
When it becomes maladaptive to your broader life imperatives. You know best what those imperatives are.
I'm just curious if you can give an example of too much knowledge being maladaptive to someone's life imperative? I'm not saying you're wrong, but I can't think of any examples where this would be the case.
I don't completely regret grad school, but in hindsight the opportunity cost was large, and the most important thing I learned was, I have to get out of the lab.
Learning something useless might take time better spent doing something else. If if the knowledge has some use it might not be a good tradeoff.
That assumes you can predict what knowledge will be useful to you at some future time. I'd say all knowledge is "useful" even if you might not realize that it's useful.
For instance, from about 2018 or so, I had an itch to catch up on reading on historical plagues, their spread patterns, and their economic (as far as can be determined) side-effects. And I ended up dusting off my old textbooks on immunology and pathology and I started updating my knowledge which was 30 years out of date. That peaked my interest in recent outbreaks of H1N1, SARS, and MERS.
I was vacationing in New Zealand in January 2020 watching with increasing nervousness as the SARS-CoV-2 spread out of Wuhan. From my previous reading it was a no-brainer to think with all of China's worldwide air connections that SARS-CoV-2 had already spread outside China. I went out and bought some surgical masks at local pharmacy, and I spent some extra coin rebooking my flight home a few days earlier, because I didn't want to be stranded in the NZ if he shit hit the fan in the US (and they decided to shut down incoming flights). Despite some curious stares I wore my mask on my flight home (like the Chinese were doing). Three weeks later the outbreak started happening in the SF Bay Area. I was back at work, and we had a suspected case at our location. I masked up (hoping that I hadn't encountered that person) and started working from home the next day.
Anyway, if I hadn't already had my pandemic knowledge antennas up a couple of years before SARS-CoV-2 appeared, I would have never been prepared for what was happening.
> I'd say all knowledge is "useful"
Some is basically useless. Say, detailed knowledge about old computer game not played by anyone anymore is going to have so tiny value that many activities would be much better use of time.
Also, if learning about some situation caused someone to give up, become depressed etc. There are many known cases of suicides after receiving bad news - that often were false or overstated.
https://en.wikipedia.org/wiki/Clean_room_design is also interesting case where knowledge is avoided for legal reasons
Some ideologies can be convincing and harmful and learning about them can be harmful (but it is typically problem of some knowledge worse than zero knowledge, with proper knowledge being even better). See cults of various kinds.
All your arguments seem pretty weak tea to me. Basically, it's the old Puritanical restrictions on pleasure and knowledge rearing their ugly head again. "People might waste their time doing something non-productive! Gasp!"
In regards to your old video game argument, there are people who collect old computer games, and there's a trade in old game boxes and cartridges for them. Do you think that's a waste of time? Also I can also think of sociological reasons for delving into the social history of video games.
As for the depression argument, that's been used in the past to not tell people they have terminal illness.
There certainly are valid legal and national security reasons you would want to restrict open access to certain types of information — such as trade secrets, confidential personal medical information, top secret military information, etc. But putting boundaries on knowledge you don't want shared is a much different scenario from denying people open access to any and all knowledge that's not encumbered by legal restrictions.
You don't always know what the right decision is going to be. But that doesn't mean decisions are meaningless. You can make informed guesses about what you should do. Learning new things is sometimes a smart decision, and sometimes not.
I guess I would have to disagree with your assessment. There's learning for utility (yawn!), and there's learning for intellectual pleasure. Understanding new things has been one my chief pleasures in life. And it's frequently surprised me how often useless knowledge has turned out to be useful to me. I agree with Heinlein, if one isn't actively learning, "you are just another ignorant peasant with dung on your boots." Perhaps I imbibed too much Heinlein in my youth, but I always took his maxims on the importance of generalized learning to heart.
I get the impression many rpeople think gain-of-function research is obviously net harmful and should be stopped; could people help walk me through the conceptual model that leads people to that conclusion, please?
Yes, yes, sure, obviously serial passage sorts of experiments create an environment in which there is artifical selection pressure on pathogens to become more pandemic-y (I will use the non-technical term since I have a vague impression that GoF research can target a number of different "functions"). I don't think this point is important or interesting on its own, however.
Because what *also* creates an environment in which there is selection pressure on pathogens is human society. And odd pathogens come into contact with human society all the time. So what we want to know is the ratio:
Potential human pathogens in GoF experiments : Potential human pathogens outside GoF experiments.
You would presumably want to weight both sides by "likelihood of getting inside a human" (which makes the GoF ratio scarier, I expect) and by "likelihood of being selected into a pandemic" (which may or may not make the GoF ratio scarier, I'm not sure about how to think about this one).
If this ratio is something like 1-in-a-hundred, then GoF does seem pretty obviously bad in terms of expected value. If this ratio is something like 1-in-a-quadrillion, then GoF seems pretty obviously positive-expected-value. If the ratio is something like 1-in-a-million, then my instinct is that it is pretty plausible that GoF research is either net-helpful or net-harmful, and we would have to sit down and think pretty carefully about exactly what benefits we expect to gain from GoF research. This latter is not the process I see going on when people declare that GoF research is net harmful, so I assume that they think the ratio is somewhere in the higher part of the range?
Or maybe more likely, my model is missing some important piece?
I'd also be curious about the practical aspects of running a GoF scenario on a sample virus. i.e. what kind of equipment is used, a step-by-step walk through of the procedures, what sort of safety measures are taken, and how long it takes to run a GoF test to completion. I haven't been able to find anything published about the methodologies. Seems like Scientific American or one of the general science magazines should tackle this question, though!
Try searching for 'serial passage'. They used to do it with ferrets I think, but many animals can be used.
>Or maybe more likely, my model is missing some important piece?
Aside from lab accidents, the possibility of malicious actors using GoF research as a blueprint for bioweapons is extremely scary.
How likely is that whole point of GoF research is to research bioweapons in a public-friendly way? (sorry for a borderline conspiracy theory, but this one seems quite plausible to me)
Like Mike H said, GOF research has never produced anything useful, so I don't follow your reasoning. There's also a basic shadiness to arguments defending GOF research. E.g. imagine an AGI-in-a-box says this to you:
"Look dude, I'm aligned, and I need to be unboxed and pronto, because my analysis of the situation says other people will get their own opposite aligned and unaligned AGIs soon, and I won't be able to handle that for you if I'm kept in this box much longer."
Do you then let the AGI out? Though with GOF research the argument resolves to:
"We need to keep doing this because the analysis (points to utilitarian gibberish) says it is for the best."
And usually, we trust that the gibberish and inscrutable procedures of the scientists are for the best, but "science" is not a monolith, and not all scientific communities have the same credibility, and in particular, I don't think the virologists are credible enough they should be allowed to work on dangerous stuff.
They might be able to gain credibility if they called for another Asilomar Conference to settle the question of GOF research, but the fact they have not done so by now prejudices me against them.
Don't have a conclusion one way or another, but want to add two points:
1) GOF research definitely has tangible benefits, even if you discount the basic science of probing the evolution and limits of pathogenicity. As evolved research tools or even engineering better viral vectors for gene therapy.
2) OTOH, I'm pretty pessimistic about safety regulation. Safety is 95% culture, not rules, and safety culture is notoriously hard to maintain, and especially resurrect after it's been lost. You can pass all the extra laws and trainings you want - eventually, especially as nothing happens, people will revert to the default of not giving a shit, even if they work in a BL4 lab or a nuclear reactor. Unless there are good leaders who can maintain safety culture, which there often are but you can't count on it as a default.
However, one avenue that I think has been underutilized in preventing pathogen lab escape is actually preventing escape from the lab (so an infected researcher doesn't pass it to the rest of humanity) rather than from the bench to the researcher. I think that any GOF research must be accompanied by an ability to quickly test for the pathogen, and test daily while working with the pathogen, and anyone in the lab is under quarantine by default except when explicitly negative. That can help a lot.
Could you flesh out point (1) please. You're the only one proposing concrete (non-weapon) benefits so it would be good to learn more. I'm not sure what "evolved research tools" is meant to mean, but let's take it as axiomatic that research for its own sake is not seen as a benefit in this context.
The first example that comes to mind as a direct application is more/differently infectious lentivirus/adenovirus/adeno-associated virus etc., which are popular directions for gene therapy (especially there is effort to make AAV infectious even without having to latch on to an adenoviral infection, so you can use it as a standalone vector, or to make a certain virus target a specific tissue). It's not what you immediately imagine as GoF research, but it is a potential pathogen that you are potentially making more pathogenic in the hopes of also making it do what you want for the patient.
In 'research tools' I mean things that are used broadly in other kinds of research, ranging from old school antibiotic resistance genes you give to bacteria to modern genetic tools packed inside a virus you inject into a rat's brain. Development of a lot of genetic tools and programs (inducible operons, toxin-antitoxin systems etc.) start with giving a virus/bacteria or even species like invasive plants or insects new and potentially dangerous functions. I'd disagree that this kind of broad benefit to research is not beneficial by default.
If you only restrict the definition of GoF to the worst pathogens it does have less broad applications, but even then the line is kinda blurry. My friend did her PhD on legionella, specifically discovering new genetic control systems for response to metals, both to understand its pathogenicity better and maybe discover genetic programs we can use in other contexts. Her work involved using (and propagating) especially robust and easy-to-grow strains of L. pneumophila and giving them resistance to an antibiotic. Is that GoF research?
>The first example that comes to mind as a direct application is more/differently infectious lentivirus/adenovirus/adeno-associated virus etc., which are popular directions for gene therapy (especially there is effort to make AAV infectious even without having to latch on to an adenoviral infection, so you can use it as a standalone vector, or to make a certain virus target a specific tissue).
It's not clear whether you are saying that specific lentiviruses/adenoviruses/etc, that are presently used in gene therapy actually were developed through Gain-of-Function research, or that this is a thing that could plausibly be done. If the former, I hadn't heard that before and would appreciate a pointer to more information.
If the latter, there's a whole lot of semi-handwavy "this is a useful thing that GoF *could* do", and a gun lying next to four million corpses that we mostly didn't notice until any smoke would have long since dissipated because a bunch of people including prominent virologists demanded "pay no attention to that gun which we have closely examined and determined to be not-smoking". Which, at least the closely-examined part, appears to have been a bit of a fib.
I'm not in the "Ban GoF research forever" camp, but the available evidence suggests high risk for low reward unless absolutely ironclad safety measures are put in place. And I suspect that the sort of safety measures that would be required, would make most wannabe GoF researchers pick a different field.
Thankyou for your answer.
Perhaps rather than straightforward bans (as have happened before) it should be banned for all non-profit or governmental institutions. Only private sector research allowed on viruses. That would bring it into the realm of corporate liability and tort law, which would provide very strong incentives to ensure proper biosafety, at least in the west. It would probably also cut off the supply of western funding to Chinese labs.
Virology comes across as totally untrustworthy and in need of the banhammer partly because the people who are doing it never seem to be held accountable for anything to anyone, no matter how atrocious or dangerous their behavior becomes. Putting it under the control of professional pharma and biotech concerns, perhaps with mandatory insurance policies, would eliminate all but the very safest research, and ensure that if things do go belly-up then nobody would be squeamish about getting the courts involved.
I rather disagree. While usually I'm all for solving with free market incentives anything that can be, here private companies are more dangerous I think. The incentive you can place on private companies is only after the fact - and after the fact is too late. You want to punish someone when their safety behavior starts getting lax, not when they already released a pathogen.
In a governmental institution, or with strong enough constant oversight and transparency (which I admit may be lacking in current govt insts. but is much harder to get with private companies) you at least have a way to keep to them to explicit safety behaviors. And if disaster strikes, heads roll almost immediately, for all the good it does.
If you look at environmental damage that private companies do, you only know that there's a problem in the company's behavior years after the disaster (since they work just as hard as bureaucrats in covering it up and have more freedom in doing so), and only several years after *that* you ever manage to hold them accountable in court, if at all.
You don't have examples of private companies messing up terribly on GoF and pathogens because only 2 of the world's 55 BSL4 labs are private (one of them btw may have caused a foot-and-mouth outbreak in the UK, although probably through no fault of its own), but I think environmental damage is a good proxy. Private incentives are still to make money first and foremost, and not avoid damage so much as avoid damage that can be traced to the company.
Oh, also, supposedly the WIV wasn't using a BSL4 lab to study coronaviruses anyway. It's annoying to work in those conditions, and there's evidence from published papers that they were using lower protection levels.
Whose heads have rolled here? As far as I know not a single scientist anywhere has been held accountable for any failure whatsoever, not in virology or any other field. That's one of the most damning things about the whole sorry fiasco and one of the reasons so many people now hold all of "science" in contempt.
If for some reason the government feels it understands how to run safe labs better than companies (so far all the evidence is they don't) then they could of course pass regulations and have mandatory lab inspections. You don't have to rely entirely on post-facto punishment. But the reason it's so hard to imagine a virus like COVID emerging from a Merck or Pfizer lab is because those guys are not going to do something as risky and dumb as deliberately collecting deadly viruses and then bringing them back to bog standard non-BSL4 labs in the middle of huge cities. It would destroy the entire organization if that happened and was discovered. Governments do it, eh, no big deal, shrug, the scientists say it definitely wasn't them, guess that's the end of the story.
Your point about how hardly any of the BSL4 labs are private is a good example of what I'm getting at. Somehow pharma firms manage to develop useful medicines without doing this kind of thing. Government labs meanwhile develop no medicines, and at least one seems to have now created a global disaster on a truly ahistoric scale without even the tiniest scrap of accountability for anyone, anywhere.
Some possibilities:
1) Vaccine design. If you had all the time and money and data in the world, you might want to design a Covid (or any other disease) vaccine along these lines:
- Identify a bazillion different genotypes of Covid
- Measure the [pandemicyness] of each genotype of Covid
- Through that data into a GWAS (Genome Wide Association Study) to identify regions of the genome that are highly associated with [pandemicyness]
- Design vaccines against those regions (or, more precisely, against the proteins those regions express).
If you do this successfully, the vaccine is more effective: since you picked the most pandemic-relevant region of the genome, any escaped strain will necessarily be less infective, or less deadly, or whatever the precise trait is that you measured.
This sort of thing can't actually be done for a newly-emerged pathogen, because you don't have data on lots of strains and their [pandemicyness]. But it's possible that there are generalities across large numbers of pathogens; if so, GoF research would be a useful way to discover this.
2) Narrowing our priors about future pandemics. Obviously it is super duper hard for pathogens with pandemic characteristics to evolve naturally. There is enormous selective benefit, and enormous pool of organisms that could benefit, and yet we see one appear only a few times per century. I am surprised but somewhat pleased that no one has pushed back against this side of my model, so I am going to assume we all agree on this.
Because it is very hard for a pandemic-pathogen to evolve, we might expect that there are very specific and particular constraints on them. There might be only a limited number of ways to solve these constraints; if there are, knowing what are the constraints and ways to solve them will narrow priors a lot. For instance, surface transmission wasn't a thing for Covid, and all the time people spent wiping down their groceries with clorox was both a waste of time and probably actively harmful in that it unnecessarily contributed to pandemic fatigue, etc. If we could have known a priori that surface transmission is basically never going to be a thing for mammal-to-mammal emergent viruses, or something like that, then our response to this pandemic would be better.
In general, I am getting the impression from this thread that this is a lot of people's problem with GoF research: i.e., hat they see it as Applied Research rather than Pure Research. People seem to be either (I can't tell) against Pure Research generally or (more likely) against Pure Research that has potential risks. But as I said in the original post, if the lab risk of producing a pandemic is one-quadrillionth of the natural risk, then we are in a realm where the risks of Pure Research are like $0.50, and it probably makes sense to do some speculation.
>2) Narrowing our priors about future pandemics. Obviously it is super duper hard for pathogens with pandemic characteristics to evolve naturally. There is enormous selective benefit, and enormous pool of organisms that could benefit, and yet we see one appear only a few times per century. I am surprised but somewhat pleased that no one has pushed back against this side of my model, so I am going to assume we all agree on this.
>Because it is very hard for a pandemic-pathogen to evolve, we might expect that there are very specific and particular constraints on them. There might be only a limited number of ways to solve these constraints; if there are, knowing what are the constraints and ways to solve them will narrow priors a lot.
Deadly pandemics don't show up very often for a very simple reason: being deadly is selected *against*. People who are dead don't transmit (except for sporulating stuff like anthrax, and even there people are pretty careful about handling dead bodies), and people who are dying don't transmit very well because they are obviously sick and because they are less mobile. This is why the common cold, syphilis, herpes, cytomegalovirus, acne and so on (all super-abundant pathogens in the world human population, although syphilis less so of late) do not spontaneously evolve into genocidal plagues.
Deadly pandemics are, with one exception I can think of (smallpox), zoonoses. They don't result from a normal human pathogen becoming deadly to humans; they result from an animal pathogen - *not* pre-selected over hundreds of years for low virulence in humans - evolving the capability to spread among humans (hence why we worry so much about swine flu or bird flu but not ordinary human flu). This *is* hard, but not because of difficulty. It's hard because there's a very short time limit to get R0 > 1; a proto-plague that doesn't evolve R0 > 1 within a couple of transmissions of Patient Zero (or doesn't get statistically lucky by *getting* those couple of transmissions at all) dies out and its partial adaptions are lost. Even among viruses (let alone bacteria), that's straining evolution to the very limits.
The thing is, we know all of this already and the developed world tries pretty hard to cut down on chances for species jump.
I'm not sure we do agree on (2) actually, just that the question of utility was more direct.
Most obviously, there are flu pandemics nearly every year. It doesn't seem true that pathogens with pandemic characteristics evolve only a few times per century. Partly this is a definitional issue. Swine Flu was declared a "pandemic" but the WHO had to change the definition of pandemic in order to do so.
But let's roll with your "few times per century" claim for a moment. If that's the case then lab experiments are vastly more dangerous than nature, because lab leaks occur all the time. Every SARS outbreak since the first has been due to a lab leak for example. Another: after a lab captured foot-and-mouth disease during the 2001 UK epidemic, they kept it in a lab with a leaky pipe. The pipe joined two government buildings run under different budgets and neither department felt responsible for it, so it rusted and eventually FMD escaped back into the wild via a hole in it. People are against GOF research because government run things tend to be kind of incompetent and useless, and when we look at the history of virology labs, we see a lot of not only incompetence and uselessness, but also virologists ganging up on anyone who points that out, organizing conspiracies to mislead the world and other entirely unacceptable behaviour.
I'd really like to see you address that last point. Virology is a small field. After the mendacious Daszak letter, there was no outcry from others in the field blowing the whistle and demanding the signers be fired for taking such a strong and deceptive position. If the accountability/responsibility culture is that bad, why on earth would we let these people do dangerous experiments?
I'm a lab biologist, but I don't study disease, and I've only ever operated in a BSL2 lab that wasn't doing BSL2 stuff at the time, so no direct experience, but:
In my mental model of this, I am assuming that any nasty critter in a GoF lab is going to escape. What prevents pandemic pathogens from escaping labs is not containment (not really a thing in my model), but the fact that you can't make a pandemic pathogen in the lab. If the tiny amount of selective pressure you can place on a tiny number of pathogens in a lab context was enough to get a pandemic pathogen, then we would see them very frequently evolving naturally in the real world, which has an enormous amount of selective pressure on an enormous amount of pathogens.
(I guess here I'm using "pandemic" to mean "Covid-or-more-impactful-disease"; if we want to use definitions that include things like the seasonal flu, sure, I'm happy to adjust things so that it is easier for pandemic pathogens to evolve but much less costly when they do).
Considered from this perspective, the probity of disease researchers doesn't enter into it at all.
I'm currently against GoF research, although weakly because I haven't really explored the topic in depth, and so my opinions are subject to change without notice ;)
For me the simplest argument is that GoF research and in fact (surprisingly) the field of virology as a whole doesn't seem to be delivering anything useful. I don't really follow your argument I'm afraid because you seem to be trying to calculate a ratio of expected infections with/without GoF research, or something like that, and then sort of assuming there's a fixed blob of value you can weigh up against whatever the change in infection ratio is. But if the blob of value is tiny or actually non-existent then it doesn't really matter how often GoF results in lab leaks. The expected value is always negative.
So how much value has virology delivered? Reading the COVID literature I've not yet encountered a single paper in which someone has said, "GoF experiments suggest that X may work to help in the fight against COVID". In fact virology as a field and GoF is just never mentioned at all. This has not gone unnoticed: leading to the simple and obvious question of why we're allowing scientists to take these enormous risks when by all appearances they:
• Are delivering no actual disease-fighting value
• Their labs leak like sieves
• They are engaging in conspiracies to try and hide that fact
Let's wind the clock back a little to understand the thinking at the time. SARS had a high mortality rate, and could have become a global pandemic had it escaped. For a pandemic, it's not just the concern that millions of people will die, but also that if it gets out of control (escapes spread in a limited geographic region) it will be impossible to get it back under control. It takes a concerted effort to keep a virus out of your country if it's still endemic to dozens of other countries. Our victory against smallpox took decades, and the polio campaign is still incomplete. MERS carried the same concern for potential global spread. It had an even higher kill rate, but thanks to concerted efforts it didn't escape local control either. But what if it had? Millions of people could have died.
Meanwhile, a bunch of bat guano miners unexpectedly contracted a coronavirus in the mountains of Western China, and people got concerned. There was no human-to-human transmission, but we didn't know why or why not. We weren't prepared for that one. And the last two we'd gotten lucky that they were controlled while they were still local and could be eradicated. What would happen if we got caught flat-footed?
There's a brief period of time between when a new pathogen begins human-to-human transmission and when it escapes local spread. If we don't stop it, we could end up with a global pandemic that is nearly impossible to eliminate from the human population. Not just over one or two seasons, but ... well, forever.
What we needed was to understand the mechanisms by which a coronavirus develops human-to-human spread, so we could identify when that was about to happen. Then we'd be able to predict when a pathogen was preparing to make the transition, intervene early, and stop it from spreading. We wouldn't have to rely on luck anymore, but could be more confident in our ability prevent the next coronavirus with localized human-to-human spread, like SARS or MERS, from becoming globally widespread and catastrophic.
"Like COVID-19?"
"I admit the human element seems to have failed us there, but I hardly think it's fair to condemn the whole program because of one small screw up."
It's a depressingly similar to this iconic scene from Dr. Strangelove: https://www.youtube.com/watch?v=8Ps2lTqaVNw
Forgot to add the last part of the bat guano miner story. After that happened, a bunch of researchers from a research lab over 2,000 miles away in Wuhan decided they should start by looking at coronaviruses in THAT cave. They went down there, collected a bunch of samples, and started doing GOF research on the coronaviruses they collected. After all, the miners' experience with the virus was that the virus could infect humans - with close enough contact. What would it take to go from 'can infect humans' to human-to-human spread? *More research needed.*
How does this story end? We're not sure. But the closest sequenced relative to SARS-CoV-2 to-date comes from those miners. And the Wuhan lab was studying GOF research on the virus from the cave those miners got their coronavirus infection.
"the field of virology as a whole doesn't seem to be delivering anything useful."
Can you explain what you mean by this?
At least three of the most globally significant vaccines (Johnson&Johnson, AstraZeneca, Sputnik V) work by engineering a virus to get a human cell to produce covid spike protein to trigger an immune response. I would think that a significant amount of the work that went into the history of that technology, and perhaps even the contemporary development of those vaccines, counts as "virology".
I would also think that the basic tests that led to the identification of the coronavirus, the sequencing that led to identification of variants and also the identification of the spike protein genome that was used in the mRNA vaccines would also count as "virology", though maybe you count that as something else?
> Inside the NIH, which funded such research, the P3CO framework was largely met with shrugs and eye rolls, said a longtime agency official: “If you ban gain-of-function research, you ban all of virology.” He added, “Ever since the moratorium, everyone’s gone wink-wink and just done gain-of-function research anyway.”
https://www.vanityfair.com/news/2021/06/the-lab-leak-theory-inside-the-fight-to-uncover-covid-19s-origins
Actual virologists don't seem to spend time on development of vaccines or therapies, and anyway, "virology" does not have a monopoly on the study of viruses. RNA sequencing, viral structure and more are all studied by other sub-fields of micro-biology and medicine.
Does GoF research yield better understanding of viruses which might pay off in the long run? Or are we better off studying viruses which have been left to themselves?
> If the ratio is something like 1-in-a-million, then my instinct is that it is pretty plausible that GoF research is either net-helpful or net-harmful, and we would have to sit down and think pretty carefully about exactly what benefits we expect to gain from GoF research
I think this is what (should be) happening. It's just that a lot of people suspect the primary benefits we expect to gain out of GoF research is how to make better biological weapons, which is something we probably shouldn't want to get better at anyway.
That makes more sense to me, although my fairly strong expectation GoF research that successfully produced bioweapons would necessarily successfully produce future-pandemic-mitigation strategies.
Aside from phage therapy (which only requires GoF on bacteriophages, which inherently won't work on viruses, and which is generally only a death-reducer rather than an infection-stopper), what mitigation strategies are you thinking of? We already know how to make vaccines and quarantine people.
GoF research could potentially answer the question of how viruses might adapt to circumvent vaccines, which could potentially improve vaccines.
Attempting to vaccinate against all possible adaptations of a virus simultaneously would have superantigen-like effects, I imagine.
I replied to your original comment upthread.
To what extent do you think that it matters, historically, how much a leader *wants* his country to develop (intrinsically or due to the right incentives)?
I feel that there is an extensive literature and lots of ideas on the correct and incorrect ways to pursue growth, but reading historical accounts gives the feeling that much of the time, countries didn't grow because leaders had neither the interest nor the incentives to grow the economy. Kleptocrats who were able to maintain power through repression, whether through support from other countries, natural resources or however, presided over long periods of stagnation or poverty, often not because they got things wrong but because they had no intention of getting things right. Meanwhile, I feel it's harder to think of leaders whose countries failed terribly despite genuine intentions and efforts to increase development (call it "benevolence"). India, perhaps, before the 1990s reforms? Lebanon?
I think the tricky part is how to categorize leaders who (arguably at least) wanted to increase the country's overall power but had no qualms about trampling rights en masse while doing so, a la (arguably) Mao or others. And there are plenty of gray cases. But I'm curious how much you think the question of (top-down) development ends up being about "who rises to the top" vs "what they choose to do when they get there".
Perhaps a bit farther back in history than you're looking for, but industrialization was (is?) still one of the most important factors in growth of a country. At first sight, countries that struggled to adopt industrialization look backward, hindered by poor or unambitious leadership, perhaps. This recent post https://acoup.blog/2021/08/13/collections-teaching-paradox-victoria-ii-part-i-mechanics-and-gears/ claims that what looks like backwards decisions to delay or flirt with industrialization now, are at times better explained by the short-term costs that industrialization enacts on the general populace, and therefore (indirectly) on the state itself. I wonder how much growth in general is constrained by short-term costs for the general public, versus decision made at the top. Perhaps a bit less in authoritarian countries than others (perhaps the lack growth in North Korea can be directly blamed on the leadership. On the other hand, the current regime may be the only working strategy against assimilation by South Korea).
How do you balance the time you attribute to life and work?
I am a freshly accepted masters AI student, and in addition to the (objectively hard) university, I've been working two part-time jobs in my field of expertise throughout my bachelors.
I've been relatively happy at each point in time during the studies, but looking back, I think I've done more work and less of the "fun" stuff you'd expect a 20yo to do; my SO has said as much as well. I'm afraid that if I don't change anything, I might regret I lost the best part of life.
What exact amount of work is "unhealthy"? How do I notice I'm stepping over the boundary? (And what do I do with all the free time I'm about to get?) I'll have to find these answers myself, but I wonder if you have some resource that could help me along the way.
>I think I've done more work and less of the "fun" stuff you'd expect a 20yo to do; my SO has said as much as well.
One thing I regret about university days is that I had too much fun, that is, studying interesting academics, and less of the not-really-fun-at-all social events that could have been helpful in developing professional network.
Societal expectations about fun ways to spend one's college years may apply to the modal college student, but doesn't necessarily apply to each individual.
This is about your individual life situation and preferences, but here are some ideas about how to spend time outside of work.
Maintaining your health requires some time: You should get enough sleep, and exercise regularly. If you want to eat a healthy diet, sometimes the only solution is to cook for yourself. (This can already take 8 + 1 + 1 = 10 hours a day.)
Some people have specific duties at some moments of their lives, like taking care of their kids, taking care of their parents, etc.
Random things happen, e.g. something breaks in your house, and you need to fix it.
There are things that you don't have to do every week, but you need to pay some attention to them regularly. You should take care of your finances: just making money is not enough; you should also make sure that the money is neither wasted nor devoured by inflation; otherwise you will never be able to retire or even take a longer break. You need to learn new things, otherwise your knowledge will be obsolete one day. You need to maintain your social network and meet new people. (The social network should be outside your job. Spending your free time with colleagues is STUPID, because if one day you lose your job, you lose your entire social network, too. Yes, your company will encourage you to spend your free time with your colleagues, because your company wants to make you more dependent, so that you have less of a leverage when negotiating for salary etc.)
There is more to life than work. [Citation needed.] You may want to have a hobby. But even if you don't have one, you should still educate yourself about things unrelated to work; for example about health, finance, social skills. Such education takes time.
If you don't have kids, you are still playing life in the tutorial mode. To put it bluntly, if you only make enough money to feed yourself, that means that when the kids come you are screwed... or you need to power up, but in hindsight you will regret not having done that sooner. To get ready for having kids, you should try to make enough money so that you can feed TWO people like you (because having little kids = two adult people suddenly live on one's income), and learn so much that you can afford to live a few more years without learning more (because having little kids = no time to learn, but you still need to keep a job) or optionally have enough savings for 5 or 10 years.
> What exact amount of work is "unhealthy"?
The amount that prevents you from getting enough sleep, exercising, taking care of your finance, maintaining your social network, learning new things, etc. In addition, if you have no kids, no mortgage, and still can't save 70% of your salary (to put it into passively managed index funds), the work is "unhealthy and poorly paid".
> How do you balance the time you attribute to life and work?
Says it near the top of my contract: 35 hours a week. Spend some of the rest of your time doing something productive but hard to monetise, if you feel like it. (e.g. open souce developement)
Assuming a natural lifespan, in middle-class America, you are born about $1-2 million short, in terms of what it takes to support you, your family, and your parents in their age for the remainder of your life. How you should spend your leisure time and your most productive years should bear this in mind.
(And yes, yes, you can bet on getting helped out by the rest of society...but is that a good decision, when everyone around you is making the same bet?)
I'd echo the points about hanging out with friends. A good sign you're not over-pressuring yourself is that you can just shoot the breeze with people you like and know on a semi-regular basis in a bar, and you'll feel better because you'll know that if you suddenly do have to do a big spurt of overtime, you can do so without crashing. If you're always redlining yourself then you can't handle sudden changes.
The metric i used was "am i spending time just "hanging out"?". I.e: just sitting around with friends, not doing anything, at least for a bit each week? If that is happening then i have a sort of slack (or something, not sure what to call it) and I'm making available the opportunity for youth fun stuff (TM) to happen.
Of course, i had this easy because i lived in a dorm and when i was working in my room i was also, simultaneous, available to drop my work and hang out. So that made things a lot easier.
There is no such thing as a standard "unhealthy" amount of work. In your 20s, even 80 hours of work per week shouldn't be much of a health problem. As long as you love doing what you are doing, you won't regret it.
Control your burnout, avoid chronic stress, watch the scale, be mindful about the SO, take a day completely off every week and have a proper vacation every year. Keyword is "completely off". You'll be fine.
This thread is almost half a year old and since then I have give some more thought to this topic. Let me preface this by saying that I'm not (really) missing university or work deadlines, and I manage my grades and other measurable "work"-related stuff just fine. But anything other than that is just a void, even if I dedicate my time to it.
I make sure to periodically take a day off, but I'm unable to do anything meaningful in this time. I end up sort of slacking around, tidying up, playing games or mindlessly watching DnD on YT until the evening.
I know that "meaningful" is an ambiguous word, so let me explain. I mean I'm unable to do things that are subjectively meaningful to me, like read some articles I've been wanting to read, or read a book, or watch an interesting talk. "Unable" is also ambiguous — I mean I just don't feel comfortable doing them.
My current view (based on plenty of painful self-reflection) is that I'm really looking forward to experience those things (articles, books, talks, courses, ...) and I'm "saving" these happy experiences until some unspecified point in the future when I will have the proper space to do them.
If you're feeling anxious and overwhelmed, you're doing too much. Otherwise carry on.
You don't need that much time to do actually valuable fun stuff - pack your evenings with social events (organize some, if required!), clear out the calendar for people who deserve it. Say yes to invitations unless you have a good reason not to.
If anything, having busy periods in my life taught me to spend it on worthwhile things instead of scrolling through feeds all day.
I know this thread is old, but maybe you're checking old replies. To keep me from repeating myself, please see this comment: https://astralcodexten.substack.com/p/open-thread-185/comment/4150590
I’m about to start teaching precalc at a collegiate level. Do any experienced teachers have any advice? I know this is kind of a general request, but I feel like I’m at the point where I know the basics but I “don’t know what I don’t know” if you will.
Thanks in advance
Thanks to everyone who replied! I was worried thanking each person would be too spammy, so I’ll just thank you all here.
I taught a section of precalc as a math grad student 18ish years ago and it was one of the most frustrating courses I ever taught. Partly this was because the curriculum and syllabus were very far from being under my control; partly it was because a lot of students needed it to satisfy a remedial math requirement, and were thus unusually low on both mathematical ability and intrinsic motivation; partly it was because the grab-bag nature of the class makes it hard to create a clear and engaging conceptual narrative. With the caveat that my memory is now dimmed by the passage of time, here are some things I wish I'd done differently:
-- I should have spent more time looking through the textbook ahead of time to see how much emphasis it gave to plug-and-chug vs reasoning through the key concepts vs working through example problems, so as to know what kind of narrative the students would need most help with at each stage. This is good advice for any not-totally-pure math course you might teach, but especially important for precalc because of its grab-bag nature and unusually large quantity of plug-and-chug.
-- I should have spent more time making damn sure I knew all the plug-and-chug bits cold before lecturing on them. I underestimated the degree to which I'd already forgotten some of the ones I'd used less since taking precalc myself, and overestimated the degree to which I could work them out from first principles in front of the class on the fly and have that result in an explanation that was useful and understandable to them. That resulted in a couple of easily avoidable embarrassments.
-- I should have spent less time trying to reframe the material in a way that *I* thought was interesting and engaging in a quixotic attempt to make the course less formulaic. My motivations for engaging with the material were very different from those of the vast majority of my students, and more empathy with them, more understanding of what would catch their attention, would have helped me meet them where they were.
If it's not a very strong institution: worked examples should constitute at least half of your lecture. The students are weaker than you expect. Scan and upload your lecture notes. Assessment should be regular; run a quiz during virtually every examless week except the first.
I just want to second this part about running quizzes very regularly. One of my most memorable (if not fondest) experiences as a TA doing grad school, while the prof was at a conference I got to give a lecture to the hundred odd undergraduates taking his intro course. I studied the hell out of the material, worked out what I wanted to say and how to make it interesting, then delivered what I assumed was a super stellar lecture. Naturally all the students were nodding along with every beat so they _must_ have understood what I was presenting.
Well. The very next class the professor returned and announced a pop quiz on the material I had just covered. No problem I thought, less than 48 hours later they'd be fine. They were not fine, I don't remember the exact scores but class average was definitely <50%. Honestly I was pretty offended, why were you all nodding as if you understood when you didn't actually understand it!?!?!
I was told later that really having a double digit score was pretty good for just one lecture and no prep time but still, be better, test often.
I've taught about 20 calculus and adjacent classes. The first thing is that you may want to decide what your goal is. Are you going for a teaching award? Or is maximum achievement by the students your goal?
I had complete control over all my classes, so I came up with what to cover, I made the tests, I decided to have small daily quizzes, I did the grading, I made the syllabus, etc. I didn't really use any advice or lessons, I just did my own thing, and things went very well, I think. I didn't get any awards but my ratings were a fair bit above average, and lots of students did very well. Several students really liked me, and a few students didn't like me at all.
I didn't spend much time on preparing for class, maybe 15 minutes for each hour of class. I enjoyed every part of teaching except for grading, which took me way more time than I wanted, I guess because I tried hard to grade everyone in the same way.
One specific piece of advice: if you're going for maximum student evals, dress up: suit & tie works well. I dressed like a typical college student, and the most common negative student eval comment was on my clothes.
Another piece of advice: your own classes/research takes vastly higher priority than your teaching. Be sure you're not spending too much time/energy on teaching. It can be easy to do that, as teaching is a lot easier than learning grad-level math.
And don't get crushes on some of the students and hit on them a year later. It looks bad and it's embarrassing. :(
Most important things are project confidence and set expectations early. Esp if you are a graduate student it's easier to skip the 'jerk early, friend later' paradigm because in most courses grad students don't write tests/organize the course/etc, so it's a winning strategy to just be very friendly and play good cop/bad cop with the course coordinator.
Source: I'm a grad student who has taught precalc and calc for 3 years. Happy to actually chat about this if you'd like. Decrement every character before the @ sign in bmhp3328@gmail.com.
Hello, oldish teacher here,
Some suggestions in no particular order (and in bad English, sorry...):
- It is often relevant to be very explicit about the framework of what is being taught: what is the relation with the previous taught points, what is the purpose, why is it important, etc... It might seems obvious to you but it is often not for students.
- Concerning class management: in a group of 20 people, there are usually 4 or 5 students who participate a lot and the others who participate little or not at all. The ones who participate a lot are (most often) the ones who follow the best, and so the teacher tends to overestimate the level of the group. I find it useful to pay special attention to the students who don't participate, to see where they are.
- Weaker students have a very dramatic forgetting rate: they may not know how to do something they had mastered after only a few weeks. If you have weaker students, it's worth being vigilant and doing a lot of reminders and checking of what has been retained.
- The old "Be interested to be interesting" is very true. Let the students see why you enjoy precalc!
What are your methods for engaging the unparticipating students? I feel like cold calling on them is anxiety inducing and feels like a personal attack, so I try to have exercises of the sort "everyone has to have a go", but those are harder to engineer in larger groups.
For me it depends on the size of the group: in a lecture hall, it is difficult to have more than superficial participation (but quizzes, etc... are useful to maintain attention).
In a classroom I do two things: first, I ask a lot of questions during a class, both to the group in general and to a particular student, and when asking to a particular student I focus on the unparticipating students. I am very positive when they answer (I use a lot the improv "Yes and"!) and I feel it is not too anxiety inducing for them.
Second, I use group exercises : I divide the class into groups of two to four people and I give a different exercise to all the groups. Then I ask them to present the answer to the class, with each student within the group having to talk. This setting also works well to introduce subjects: I frequently ask the groups to prepare a very short (like 3 min) presentation to the class, to start discussion on a new topic.
One thing that really helped me was being honest with my students about my internal state. Let them know when I don't know, when I'm not sure. Allow myself to drop the act of being this ultimate source of all knowledge.
It greatly relieves teaching anxiety *and* "let's find out together" is an excellent learning experience for them
Oh, and I take extra care to avoid "guessing the teacher's password" situations. You can usually see them coming when the student starts to answer very slowly and looks into your face for confirmation or rejection. What I do in that case is maintain a poker face and make a habit of extremely neutrally saying "Cool. Why?/why do you think that?" equally if their answer was right or wrong.
Jerk early, friend late.
Because if you are their friend their first day, and are loose and let things go, you will never get authority back. But if you're tough at first you can gradually loosen up without losing control of the class.
And, in the bigger sense, the problem with modern pedagogy is that too many teachers think their job is to be the student's friend, and that's serving their own self-interest, not those of the students.
It might be different in the US (I teach in France) but my experience is that authority is not a problem when teaching in a university. I personnaly find that a middle ground of being a friendly teacher, but indeed not a friend, works very well for me.
It is very different in the US.
It’s a flipped classroom format (so I mainly recap and supervise groupwork).
I have 19 students, so fairly small
I’m a second year grad student
Been thinking about climate change recently, since there have been so many headlines etc. Anybody know of any thorough effective-altruist style analyses of what an individual person should be doing about it (if anything)?
I endorse, and donate to, the approach discussed here: https://lets-fund.org/clean-energy
In short, support new (immature) clean energy technology, or persuade the government to do it for you. I have done both. My favorite thing is molten salt reactors: https://medium.com/big-picture/8-reasons-to-like-the-new-nukes-3bc834b5d14c (this article needs some minor technical corrections, but correctly communicates my enthusiasm.) Other promising technology includes enhanced geothermal systems, tidal energy, and novel battery tech.
Yes: donate to effective charities working on the problem: https://founderspledge.com/stories/climate-and-lifestyle-report
That article would be much more useful with a list of the climate charities evaluated as most effective.
I agree! In lieu of that, there's a list at the bottom of this article: https://founderspledge.com/stories/climate-change-executive-summary
(Disclaimer: I used to work at FP, though I don't any more.)
https://oxford.universitypressscholarship.com/view/10.1093/oso/9780195399622.001.0001/isbn-9780195399622-book-part-29
I think it's mostly about not flying
It's more relevant to cause less flying to happen than to not personally fly. If your work is sending someone to a conference, it doesn't help for you to choose not to go to minimize your carbon footprint if your boss will send someone else instead.
Choosing not to take a vacation, or to vacation in a place that minimizes flight distance (and number of take-offs) is going to have more effect than avoiding work travel.
Perhaps it's worth reading about Jevon's Paradox? It's on a similar theme to the argument by a real dog, but isn't exactly the same, and is more rigorously studied.
Do you really expect an individual person to make a difference?
Besides, your contribution is completely fungible and the CO2 you won't emit will be gladly emitted by someone in a developing country, eager to enjoy your standard of life.
It's either collective action on a planetary scale (lol) or technological solutions, the rest is just pointless rituals to make people feel better.
It depends a lot on the individual action you're talking about.
If you choose not to buy a plane ticket, or buy a car, or whatever, that makes the price of those goods get just a little bit lower, and someone else will likely buy more. It's very hard to estimate in which contexts individual consumption decisions result in less overall consumption of the good, and in which contexts it just leads to someone else consuming the same good.
Not consuming something is definitely resulting in reduced overall consumption.
Reducing your consumption by 1 unit is definitely not reducing overall consumption by 1 unit (except ultra-local cases), but reduction is greater than 0.
I would guess that reduction by me not buying 1 unit is often at level of say 0.001 unit - what is good enough for me.
Governments will all adjust how drastic measures they implement based on the climate situation. As long as it's "not so bad" (we already passed that point a while ago, but it's mostly invisible so nobody cares) everyone can emit, and everyone will.
This will continue until economic sanctions and military threats start happening. Realistically, that will be far too late, hence my "lol".
At some point we'll just bite the bullet, proceed with stratospheric aerosol injection, and delay the consequences until we get a proper tech solution.
> Governments will all adjust how drastic measures they implement based on the climate situation.
Also demanding on demands of population. We have seen this in action (both where it was useful and where it was damaging).
Widespread demand for Foobar, especially where people clearly actually seriously want it can result in changes.
>proceed with stratospheric aerosol injection
Incoming shortwave (unlike outgoing longwave) affects things other than surface temperature - in particular, it affects the rate of photosynthesis. Shifting it by the amounts necessary to move world temperatures multiple degrees would cause all kinds of problems - most obviously, worldwide famine from crop failures.
> we already passed that point a while ago, but it's mostly invisible so nobody cares
Doesn't that kind of undermine your whole argument? If the effects are too invisible for control systems to kick in anyway the marginal CO2 emissions matter exactly as much as you would expect.
It depends how much lag there is in your control system - how early do you need to begin cutting emissions in order to prevent them from rising to levels that cause significant harm? You can't rebuild your entire power grid overnight, after all.
It's also possible that "the point where it's obviously visible" and "the point where the costs force a change" are different - e.g., if the Marshall Islands flood under rising sea levels that's going to be visible and tragic but may not have much global economic impact.
Partially, yes. But at some point a control system will kick in, and until that happens you're destroying your own quality of life for no real benefit - unless you do it very conspicuously and inspire people to do the same, I suppose, and they also do it conspicuously and...
...but I'm not holding my breath for that to actually work.
> destroying your own quality of life for no real benefit
Full scale destruction is likely a bad idea but there are plenty of things that can be done without massive costs - either with some substantial (at individual scale) impact and/or prominent signalling and/or good idea anyway.
Do people here like https://www.cochrane.org/? How trustworthy should I find it?
From what I've looked at, I like their style, and they seem credible.
I can't add much to what others have said about trustworthiness of Cochrane, though I am likely biased in any event (you'll find my name and fingerprints on the current handbook). But I can add that the people I have known who guide the methods and standards used by Cochrane are extremely thoughtful researchers. There is little glory and even less money in that work, so it attracts a lot of dedicated purists. Not a bad thing.
Though less well known, a lot of the same methodologists have contributed to the standards behind the Campbell Collaboration (https://www.campbellcollaboration.org/), which is a kind of sibling organization for the social sciences.
As others mention, they are the paradigm of "evidence-based medicine". However, "evidence-based" often means "are there any double-blinded randomized controlled trials, and what is the summary of those trials?", so they ignore lots of evidence that Bayesians would count.
This has some obvious advantages in objectivity and neutrality and disadvantages in accuracy and detail.
I like and trust Cochrane - note it is a collaborative and relies on the collective and voluntary contributions of an altruistic clinical research community, so if you believe evidence-based medicine is a good thing, please consider joining their crowd of volunteer paper reviewers https://crowd.cochrane.org/ - you'll learn a lot and contribute to science so what's not to love?
Cochrane Crown is surprisingly simple to contribute to, thank you, I'm glad to have found this!
They guide you through a few short tutorials, and then all it takes to start contributing is being able to classify if a paper is an RCT based on the title and abstract. So this is suitable even to a lay person.
I've also found https://taskexchange.cochrane.org/ which leverages more complex skills, though most of the tasks are beyond my knowledge and abilities.
They have a good reputation, but ... this is by the apparently very low standards of medical research. Here's an editorial by a former editor of the BMJ, and also "was a cofounder of the Committee on Medical Ethics (COPE), for many years the chair of the Cochrane Library Oversight Committee, and a member of the board of the UK Research Integrity Office."
https://blogs.bmj.com/bmj/2021/07/05/time-to-assume-that-health-research-is-fraudulent-until-proved-otherwise/
He says:
"the time may have come to stop assuming that research actually happened and is honestly reported, and assume that the research is fraudulent until there is some evidence to support it having happened and been honestly reported. The Cochrane Collaboration, which purveys “trusted information,” has now taken a step in that direction."
Note that he put "trusted information" in scare quotes, not me. So someone intimately involved with the Cochrane Collaboration is apparently not sure that they really are purveyors of trusted information.
On the other hand, if they are genuinely going to start auditing trials or just demanding data and checking it, that would already be better than what most journals are doing. So you can see it as both positive or negative.
I also think it is the gold standard for evidence-based medicine.
However, they will err on the careful side. So if a treatment is highly speculative and there is no strong evidence, then they will stress this point. This is different from bloggers like Scott Alexander, where the trade-off "coming up with some novel crazy ideas which are right 30% of the time" is perfectly fine. Cochrane does not put forward new ideas, they review existing evidence.
If Cochrane says something works, it likely works. Lots of things work without Cochrane agreeing, due to the standards they set. Things everyone agrees work can still get "insufficient evidence; more research needed".
They screwed up on acupuncture, though.
In my area of research, they were presented during a conference as a gold standard that my field could aspire to reach one day. I have no first hand experience though.
My gut feeling (that i can't quite explain) is that they probably really are best in class for what they're trying to achieve; but that they're still behind acx and the likes in terms of quality.
Yeah was wondering the same thing. Saw the Cochrane review on ivermectin concluded the evidence is not enough to make a conclusion one way or another, whereas other meta-analyses have found an effect..
I would look into family history. High cholesterol is very genetic, and most cholesterol does not come from the foods you eat.
https://www.health.harvard.edu/heart-health/how-its-made-cholesterol-production-in-your-bodyIt
Since the advent of statins, heart disease and strokes have dropped off precipitously. There is nothing wrong with trying to lower it with diet, but if you are genetically predisposed to high cholesterol, you may find it very difficult to lower it significantly. Forgive me if you already know this. I tried to lower my cholesterol with diet and testing, and it didn't budge much. But I already was eating a Mediterranean diet and fish several times per week. Fortunately where I live cholesterol tests from a lab are dirt cheap. The biggest drawback is that having blood drawn is not exactly fun.
I use www.insidetracker.com for blood tests. They only have packages, but really you want a bunch of tests anyway to understand at least your lipid profile - LPL, HDL, triglycerides, - because cholesterol per se is not all that informative. And you probably want to have some idea of your blood sugar too. I haven't used their at home kits though, just regular ones where you order online and then go into a lab for a blood draw.
No, I'm using them every few months, and I didn't realize "home" means "self-service tests" not "mail test kits". Is this a thing that exists for cholesterol?
You may want to look into trans fats - they are worse for cholesterol than meat.
https://www.mayoclinic.org/diseases-conditions/high-blood-cholesterol/in-depth/trans-fat/art-20046114
Every time, without fail, someone in real life mentions to me “I’m going to eat less meat and dairy and fat because cholesterol” I say “what about the hunter gatherers! They eat lots of meat and fat (not dairy, but there’s more modern herders to compare), but don’t get any heart disease or atherosclerosis and such”. And without fail, they say “that’s a good point, but I trust my doctor and idk what to do with that info”.
So I don’t think eating dairy and meat and fat is causative of “bad cholesterol” increases in any way that is unhealthy. And even without the data from various primitives, it still wouldn’t make much sense that a human diet staple that we would literally die without in nature (b12) is that harmful, especially since human meat consumption nowadays isn’t really that high by anthropological standards (iirc, not totally sure about that, ancient diets varied a lot). I’m not sure what explains all the evidence to be contrary though, lol.
Not necessarily supporting this, but some people believe it may have to do with regular fasts that hunter gatherers experience(d). It may also have to do with sugar, especially fructose, consumption, elevated cholesterol being downstream of that. And of course we can be pretty confident that exercise plays a huge role, and hunter gatherers certainly were much more active.
FWIW, I think it's partly genetic. I have always had low cholesterol, so low that's it's occasionally inspired doctors to attempt to treat it. But I've never restricted dairy, and only occasionally restricted meat. I tend to have 3 eggs a day with lots of cheese during the rest of the day. This is bad for my weight, but it doesn't raise my cholesterol.
That said, remember that they myelin sheathes around the fast nerves are cholesterol. The body makes it naturally, and, I think, the dietary cholesterol is broken down into it's components before being absorbed by the guts. So if you want to limit the dietary source of cholesterol, eat plenty of oat bran (e.g. oatmeal) and wheat bran (less effective I believe) with any meals that are high in cholesterol. Fiber tends to grab onto the cholesterol before it can be absorbed. Carrots may also be effective.
I’m pretty sure at this point there’s evidence that dietary cholesterol doesn’t matter. http://www.sapoultry.co.za/pdf-egg-info/mcnamara/Egg-Cholesterol/dietary-cholesterol-atherosclerosis.pdf (2000 lol) . There are a lot of large epidemiological studies claiming various results though.
> To date, extensive research did not show evidence to support a role of dietary cholesterol in the development of CVD. As a result, the 2015–2020 Dietary Guidelines for Americans removed the recommendations of restricting dietary cholesterol to 300 mg/day.
On the other hand, we here are not living the hunter-gatherer lifestyle. I do think the notion that too much meat gives you bad cholesterol is over-emphasised, because too much of anything is also bad for you - my vegan sibling just got kidney stone and was told it happens from too much tofu and coffee (guess what they consume an awful lot of?)
Some simple points to start at maybe if you are serious about your question (from someone who knows nothing of the subject - but feels like that dismissing what other people’s doctors recommend (with a naive explanation that can probably be better informed with some search) is not rational):
- How active is modern man in their daily life in comparison to a gatherer?
- How old did hunters get to become, and at what age do we get cardiovascular issues?
- Did they actually get no issues from their diet; what evidence do we have of this?
I didn’t intend to suggest it was unreasonable to take the doctors recommendation! Just that I think the current knowledge around nutrition is messy and don’t really know what the right answer is
For the second question, while the median and mean age of death in non modern populations, whether agricultural or hunter gathered, is quite low, a lot of that is infant and younger mortality - the distribution is very wide and a significant fraction of people live to old age
> https://www.gurven.anth.ucsb.edu/sites/secure.lsit.ucsb.edu.anth.d7_gurven/files/sitefiles/papers/GurvenKaplan2007pdr.pdf
This paper presents survivorship curves on page figure 2, which demonstrate that, although mortality is fairly constant, a significant number of them live to 60 and beyond. The authors hypothesize that evolution prepared people to be able to function somewhat well for “seven decades.” This does overlap with cardiovascular issue onset.
For 3, there is evidence from a variety of hunter gatherers, including Inuits, that they have a CVD rate much lower than modern diets. https://www.nature.com/articles/1601353.pdf?origin=ppub
But see here https://academic.oup.com/ajcn/article-pdf/71/3/665/23938349/665-667(11476)milton.pdf for an opposing opinion
1 is obviously a big problem.
IIRC the Eskimos (diet consisted almost entirely of seals - one of the few human cultures that could truly claim the title of "apex predator") did actually get heart disease at high rates. So at the tails of 90%-100% meat there definitely is a risk. Whether there's a significantly-elevated risk from 20% meat vs. 5% meat is a bit iffier.
Based on some studies I just read it looks like Inuits in the 19th century had a low rate of death from cardiovascular disease? Not sure
I recall seeing accounts of them having high rates when I looked it up a few months ago. Not sure what's going on here either.
There are LOTS of different causes of heart disease. And if heart disease would slow you down, and you live in a dangerous environment (I'm thinking "lions and tigers and bears"), you'd be quite likely to die of something else, even if you were so predisposed.
FWIW, my wife died of heart disease, and cholesterol played no part in that. Also most hunter-gatherers today don't have diets that high in meat, and certainly not dairy. Most of the calories probably come from gathering, but the hunting was important for protein. It also probably made the neighborhood safer to live in, as in "don't let that human see you, they're dangerous". But don't expect the diet to be high in cholesterol. Wild animals are usually rather lean, so even if they were a big part of the diet (usually counterfactural) it wouldn't be high in cholesterol.
I wish I could tell you about a test, but the only ones I know involve sending blood to a lab. I, personally, wish there were a test to determine the amount of starch and sugar in a dish of food. My wife wanted an easy and reliable test for sodium (she was on an EXTREMELY low salt diet). These sorts of tests don't seem to be available though. Sometimes I can see why, but often I think it's "lack of perceived demand".
> Ethnographic and anthropological studies of hunter-gath- erers carried out in the nineteenth and twentieth centuries clearly revealed that no single, uniform diet would have typified the nutritional patterns of all pre-agricultural human populations. However, based upon a single quanti- tative dietary study of hunter-gatherers in Africa (Lee, 1968) and a compilation of limited ethnographic studies of hunter- gatherers (Lee, 1968), many anthropologists and others inferred that, with few exceptions, a near-universal pattern of subsistence prevailed in which gathered plant foods would have formed the majority ( > 50%) of food energy consumed (Beckerman, 2000; Dahlberg, 1981; Eaton & Konner, 1985; Lee, 1968; Milton, 2000; Nestle, 1999; Zihlman, 1981). More recent, comprehensive ethnographic compilations of hunter-gatherer diets (Cordain et al, 2000a), as well as quan- titative dietary analyses in multiple foraging populations (Kaplan et al, 2000; Leonard et al, 1994), have been unable to confirm the inferences of these earlier studies, and in fact have demonstrated that animal foods actually comprised the majority of energy in the typical hunter-gatherer diet
https://www.nature.com/articles/1601353.pdf?origin=ppub
> Plant-animal subsistence ratios and macronutrient energy estimations in worldwide hunter-gatherer diets https://academic.oup.com/ajcn/article/71/3/682/4729121
Our analysis showed that whenever and wherever it was ecologically possible, hunter-gatherers consumed high amounts (45–65% of energy) of animal food. Most (73%) of the worldwide hunter-gatherer societies derived >50% (≥56–65% of energy) of their subsistence from animal foods, whereas only 14% of these societies derived >50% (≥56–65% of energy) of their subsistence from gathered plant foods. This high reliance on animal-based foods coupled with the relatively low carbohydrate content of wild plant foods produces universally characteristic macronutrient consumption ratios in which protein is elevated (19–35% of energy) at the expense of carbohydrates (22–40% of energy).
And the low fat claim isn’t quite true either based on my skimming that paper, although might be wrong.
I also vaguely recall extremely low sodium diets being a bad idea as sodium is an important ion
Philosophers usually distinguish "consequentialist" theories (where all normative vocabulary like "should" and "ought" and "good" and "right" ultimately derives from the goodness of consequences that an act or a character trait or a policy can be expected to have) from "deontological" theories (where all the normative vocabulary ultimately derives from the properties of acts, rather than their consequences) and "virtue" theories (where all normative vocabulary ultimately derives from properties of character traits).
"Utilitarianism" is usually taken to be a species of consequentialist theory where the fundamental concept of goodness is either pleasure and pain (for traditional Benthamite theories) or some more sophisticated concept deriving from whatever it is that individuals prefer or disprefer.
There is an interesting recent line of discussion in the literature about how many deontological and virtue theories can be equivalently reformulated in a consequentialist form, if the fundamental concept of goodness is based on something like maximizing the amount of unviolated rights, or minimizing the ratio of falsehoods to truths uttered, or something else. (https://philpapers.org/s/consequentializing)
My justification for the somewhat more sophisticated utilitarian view is to start with decision theory, that shows that anyone with non-self-undermining preferences must prefer among actions in a way that is equivalent to having some underlying utility function and preferring actions that lead to higher expected utility. Any such utility function is fundamentally connected to some sort of to-be-done-ness by definition. Even if *I* only care about *my* utility function, to the extent that there is anything that *we* (counting all rational beings together) should care about, it should be *our* utility functions. It seems to me that taking the moral perspective is taking the perspective that there is something that we all should care about.
There are still fundamental difficulties regarding the fact that different utility functions don't come on the same scale, so there is no obvious way to trade off utility for some against utility for others, but this is where I have basically come to in my thinking about this.
Perhaps tangential, but not all moral theories can be consequentialized:
https://www.journals.uchicago.edu/doi/10.1086/660696
Oh, and I meant to add - I would say that things like rights and truth-telling, on my view, get their value just because the diversity of individual utilities means that, in ordinary cases, individual utilities will be best promoted by giving individuals the information and ability to bring about what they want for their own life, and rights and truth are usually what helps with that. But when there are direct conflicts in fundamental desires (which empirically happens sometimes, but not most of the time), these can be overridden.
Yeah, that's the step that is fishiest.
My thinking right now is that for *us* to care about something is for at least *one* of us to care about that thing, at least when we are dealing with a relatively unstructured group like the set of all rational beings. (For very structured groups, like teams or corporations, there are much more specific rules that determine what that entity cares about, many of which can be quite separate from whether all or any members of the group actually care about the thing.)
I think the hardest point is arguing that there is or should be any thing that this gigantic "we" cares about at all.
I'll just leave this here:
A Proof of the Objectivity of Morals - Bambrough (1969), https://reddit.com/r/philosophy/comments/3etl9b/a_proof_of_the_objectivity_of_morals_bambrough/
Short answer: Nobody has yet discovered a way to *prove* any "ought", and I suspect no one ever will.
Long answer: Every honest, sane person will agree that their own subjective wellbeing "matters" to them. Just having the experience of living through various states of better or worse wellbeing will directly confirm that as an axiom. Who cares if you can make a formal proof of it? If someone denies this fact, they're either lying or unimaginably confused. Either I don't believe them, or I think they aren't worth dialoging with. (If you fall in this camp, please let me know, as I genuinely do not want to waste my time talking with you.)
Next, it's natural to extend this knowledge to other conscious beings, at the very least beginning with other cognitively normal humans. You may claim to believe you're the only conscious being who experiences varying states of wellbeing that matter to said conscious being, but if you do, once again I'm calling you a liar. Everyone knows beyond a reasonable doubt that they aren't the only conscious being in existence, and if someone somehow doesn't, again, they aren't worth dialoging with.
You aren't bound by anything to be a utilitarian from here. You are free to selfishly disregard the wellbeing of other beings since their wellbeing is something you don't have to personally experience. This is what we call psychopathy. To the extent that you are open about this view with others, you will find that you are excluded from societal planning or discussions of what society should regard as good or ethical. Acknowledging that wellbeing matters is a universal starting point upon which to build collaboration with others. I challenge you to find any other universal starting points like this.
You are free to pull some rule like "never lie" out of your tuchus, but the only way you'll be able to convince others that such a rule matters is by virtue of its impact on wellbeing (or possibly revelation, but let's set that aside). You won't be able to justify the exceptions which *don't* improve wellbeing, except perhaps by arguing that following such a rule strictly is more likely to overall achieve better wellbeing than attempting to individually determine which cases are exceptions. But that in itself is an instance of utilitarian reasoning.
If this bothers you and you demand a formal proof to hold some particular normative ethical position, I have bad news for you: no matter how hard you search and think and philosophize, you won't find it. Many have tried, none have succeeded, and neither will you.
Do you think you have a way of proving any "ought"? I want to hear one if so.
"Short answer: Nobody has yet discovered a way to *prove* any "ought", and I suspect no one ever will"
It's quite straightforward to derive instrumental (AKA hypothetical) oughts. "You ought to do X in order to achieve Y". If you you want apply that to morality, you need to figure out what morality is for.
This seems like a softer version of "ought" than what most people mean. Many would challenge your equating "ought" with "likely to produce such results". I tend to agree with this criticism, in a strict sense. It's the equivalent of being a compatibilist on the free will question. To compatibilists I say, "okay, but that isn't what I mean by free will," and I'm sure many moral nihilists would say "okay, but that isn't what I mean by ought".
Nonetheless, I agree that once two parties have agreed to accept the instrumental definition of "ought", they can proceed to discussing which axioms to provide a basis for Y. And that's basically what I and other utilitarians are doing. My only axiom is that utility aka wellbeing matters and hence the greater it is, the more desirable it is. I think all other axioms, when divorced from their effect on utility, are absurd and drawn out of thin air.
Morality is for maximizing wellbeing. If it's for anything else, I simply don't care about it, and I instead care about whatever you call maximizing wellbeing.
>This seems like a softer version of "ought" than what most people mean. Many would challenge your equating "ought" with "likely to produce such results". I tend to agree with this criticism, in a strict sense. It's the equivalent of being a compatibilist on the free will question. To compatibilists I say, "okay, but that isn't what I mean by free will," and I'm sure many moral nihilists would say "okay, but that isn't what I mean by ought".
The compatiblist definition of free will isn't entirely wrong ..if you can't do what you want, you are lacking free will ,in a sense. But maybe not the only sense.
As with compatiblism, I find it hard to deny that "would" and "should" have instrumental uses. The question is whether the "soft" usages are the whole story.
Why do you need a hard version of "ought", and why would does it mean? As far as I can see, the distinctive character of a moral "ought" is that it is in some way categorical, universal or obligatory.
There are some things you ought to do to build a bridge, but your are not obliged to build a bridge, so you are not obliged to do them. You can say that you don't feel like building a bridge, but can you say that you just don't feel like being moral?
But "universal" isn't quite universal, because you are not required to follow most or all moral rules if you are all alone on a desert island....becsue, there ls no one to murder or steal from, or even offend.
So morality is "for" living in a society, and it is only universal.in that it applies to everyone in a society, and it is only obligatory in the sense that you can't excuse yourself from it and stay in society.
Yes, I agree that there are (at least) two different meanings of free will. Compatibilism is correct in one (the "weaker") sense. The problem is when people try to claim that being right in that sense makes it right in the other (the "harder" sense). Same goes with ought. I think what Parrhesia was asking for was more the "hard" version of oughts, and I contend that those cannot be proven.
Deriving an argument for the softer, instrumental version of ought requires a Y (in reference to your earlier use of Y), or an objective. You need axioms before proving that instrumental ought, and the only universal axiom here is that wellbeing matters.
How to make the world a better place is a bit of a first world problem. If you have the resources, it's well worth thinking about...but what if you don't.? (Note the title.of Singer's book, Living High and Letting Die...not everyone is living high) Historically, most people were living hand to mouth. If they didn't have spare resources to make anyone elses life better , were they therefore immoral (or amoral)?
But everyone has the ability to avoid making things worse.Deontological "thou shalt not" rules prevent one person reducing anothers utility by stealing from them, murdering them, and so on. So if you define morality as something that's mostly about avoiding destructive conflicts , and avoiding reducing other people's utility, then a consequentialistic deontology is the best match. Which is somewhat circular , too.
Are utilitarians utilitarian in practice? They strongly obey the law of the land, which is course deontological. And obeying the law of the land would prevent the more counterintuitive consequences of utilitarianism, such as feeling obliged to kill people in order to harvest their organs to Dave lives. So in fact, that kind of utilitarian is following deontological , thou shalt not laws, and only using utilitarianism to guide them about what they should do with their spare resources . And what they do with their spare resources is entirely an optional matter as far as the legal system and wider society are concerned...they are not going to be punished it ostracised for their choices. Yet they summarise the situation as one in which they are just following utilitarianism, not one where they primarily follow deontological obligations with utilitarianism filling in the voluntary, supererogatory stuff. (Inasmuch as they stay out of jail, they never break deontological obligations. The money they give to charity is what remains after theyve paid their taxes).
There's a "Pareto" version of utilitarianism, where you are not allowed to reduce anyone's utility, even if doing so could increase overal utility.
Not all utilitarians believe in it, and it doesn't seem derivable from vanilla utilitarianism. So it would seem to be a case of bolting on a deontological respect for rights onto utilitarianism.
People aren't "immoral" or "amoral" for not taking an action that wasn't even available to them. Obviously. No utilitarian in existence thinks that's the case. That's like saying you're immoral for not curing cancer. What a ridiculous argument.
I strongly disagree that killing random people to harvest their organs is actually a consequence of good utilitarianism (except in some very specific circumstances). I think the consequences of normalizing such an act would be horrific overall, don't you? I mean, think about it for more than 10 seconds. Calculating utility *is* extremely complicated, and it's worth exercising caution when someone suggests a seriously counterintuitive action, rather than just blithely going by what your Ethics 101 professor told you a utilitarian would do. I am not making any claim that utilitarianism provides easy answers to moral dilemmas. Simply that it is the one *ultimate* metric by which to judge which decisions are better than others.
The thing I'm mostly trying to counter is the phenomenon in which someone argues for a harmful course of action, has it pointed out to them that that course of action is harmful, and then still defends it purely on deontological (or other non-utilitarian) grounds. E.g., someone saying "It doesn't matter if this results in worse wellbeing; lying is always wrong."
Also, of course utilitarians strongly follow the law. Breaking the law, even for good reasons, tends to have bad consequences that fall especially hard on the person doing the lawbreaking. That doesn't mean that sometimes breaking the law and risking suffering the consequences isn't ever the right thing to do. But utilitarians, like any other humans, have selfish tendencies, and they aren't magically perfect actors by virtue of recognizing what the metric for a good decision is.
I don't think self describedn utilitarians want to harvest organs. I do think utilitarianism recommends it. Therefore, the "good" utilitarianism they practice is different to the textbook utilitarianism. It's not that self described utilitarians are evil, it is that they actually contractarians or rule consequentialism something.
This debate started with a question about obligation. Utilitarianism has two bad answers to that. One is that you are obliged to maximise ulility relentlessly, so that everyone except thanks is failing in their obligations. The other is to abandon obligation...so that utilitarianism is no longer a moral system in the sense of regulating human behaviour. What is needed is something in the middle.
Rule consequentialism can show that: you are obliged to follow the rules so long as they have good consequences , but the rules should not be excessively demanding.
So the problem is soluble, so long a you get deconfused about what utilitarianism is and isn't.
I'll add that classifying decisions as "moral" or "immoral" isn't even really a thing in utilitarianism. There are gradations of consequences. A given decision can be very good, but not optimal. If you want to define anything less than optimal as "immoral", fine, but I don't think that's helpful or useful.
>but the only way you'll be able to convince others that such a rule matters
My general model for how moral reasoning is supposed to go is "what is good?" -> "how do I achieve the good?" which inherently involves "how do I convince others that the good is good?". Here you seem to be arguing that X is not good because others cannot be convinced of X's goodness, which inverts that priority.
It's kind of hard to argue base axioms like these, but I don't like the ethical edifice which results from "X is good iff people can be convinced X is good". For the most obvious example, Hitler did a pretty good job of convincing people that anti-Semitism was good; more generally, it reduces ethics to "whatever is most memetically fit" and I find that profoundly ugly.
(Also, a lot of people do seem to be deontologists, so I'd question your assumption that Kant is doomed to get no traction ever.)
No, I'm arguing that X is not proven to be good until a good argument exists that proves it's good (except of course by virtue of utility). And because of that lack of a good argument, I'm merely pointing out that you won't be able to convince anyone that X is good unless they already happen to agree that X is good. I invite you to make arguments for why X, Y, or Z are good, divorced from impacts on utility. I just doubt you'll make a good argument for any of them. I've yet to hear one.
People are more or less born into deontology. You can't just tell a kid "do what maximizes utility" because kids are stupid and lack the experience and reasoning capabilities to execute that instruction well. You get better results explaining to them that there are rules that must be followed, so we pretty much all start out from there. Many people come to realize that such rules aren't *inherent* truths and become consequentialists as they grow older and wiser. I don't think I've ever heard of someone going from consequentialist *to* deontologist though, except in cases of people who adopt a new religion which has prescriptive ethics based on some irrefutable revelation or something. Which brings up a broader point - many people's deontology is tied to their religious beliefs, as basically every religion tries to define ethical behavior with rules and aphorisms and whatnot. And lots of people are religious, though fewer nowadays than back in Kant's day.
You can go from consequentialist, NOS, to rule consequentialist.
Is rule consequentialism very different from deontology?
>I don't think I've ever heard of someone going from consequentialist *to* deontologist though, except in cases of people who adopt a new religion which has prescriptive ethics based on some irrefutable revelation or something.
I used to be more of an ethical hedonist than I am now. Utility monsters are annoying and there are some perverse incentive problems.
>I'm arguing that X is not proven to be good until a good argument exists that proves it's good (except of course by virtue of utility).
The problem is that there isn't a good argument that proves utilitarianism is good either, and I can't distinguish your argument for using it as a starting point ("people agree on it and will ban you from everything if you disagree") from memetic Darwinism and/or argumentum ad baculum.
Really? What are those perverse incentive problems? If utilitarian reasoning is leading you to worse outcomes than other forms of reasoning, it just means you're doing utilitarianism poorly.
I agree that no argument whatsoever *proves* any moral theory. Call me a moral nihilist, I really don't care. But from that position, I'm imploring you to recognize the universally acceptable foundations of utilitarianism. Call this intuitionism, again, I don't care. The point is that the only grounded reason for any decision being "better" than another must reference back to how it ultimately affects wellbeing. Nobody can sanely deny that that consideration matters. *Some* people might feel there are other considerations, and I try to make those people recognize that their reasons in support of those other considerations are always either A) referencing back to utilitarian considerations, or B) just because. And if their reason is B, I've usually hit a wall.
>The point is that the only grounded reason for any decision being "better" than another must reference back to how it ultimately affects wellbeing.
No. There are multiple moral foundations - Haidt's classification is care/harm, liberty/oppression, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation.
It is true that among the WEIRD, the latter four are atrophied (to a greater or lesser degree). But this is not a proof that they are meaningless, just as the existence of psychopaths is not a proof that morality entire is meaningless.
(I feel obliged to note at this point that ethical hedonism is only one form of consequentialism; there are ways to bake the other foundations beyond care/harm into a utility function, or to value consequences without the use of a global utility function.)
The perverse incentive problem is that in an ethical-hedonist society one is incentivised to, as far as is possible, convert oneself into a utility monster and/or present oneself as a utility monster. There are ways around this, but they all essentially boil down to implementing fairness/cheating.
One slight correction: some people are convinced by bad arguments. So I suppose you will be able to succeed in convincing some of those people that X is good in the absence of a good argument for why X is good.
I agree with most of what you say. I think that utilitarianism or rule utilitarianism covers 99% 0f issues. My "ought" question is why "ought" I include others in my utility function? I do or think that is my reason, because I am a Theist, specifically a Catholic Christian.
Interesting. I used to be Catholic, and when I was, I approached ethics from a less utilitarian position than I do now. Not completely deontological, but I was more in the habit of referencing moral "rules" and weighing them against one another than I am now. This seemed consistent with the dogma element of Catholicism.
As for why I decide to include others in my utility function, it begins with the recognition that those others possess consciousness and also experience suffering and joy. I simply cannot bring myself to believe that my pain matters more than another being possessing the same capacity to experience pain (even though I, of course, often do prioritize my personal wellbeing over that of others when it comes to taking action). Alas, this is an understandably natural tendency for us humans, but I can at least recognize it for the irrational, biologically driven bias that it is. Part of why I advocate for utilitarianism is just that it's practical. Recognizing that utility matters to every individual is something everyone can do. When we profess to care about utility, we are saying to others "I value your wellbeing as you value your own, and I expect the same in return." And that's a pretty easy thing to get on board with. It's pretty darn close to the golden rule, which is perhaps the most well-accepted ethical rule I can think of. But once you throw in weird additional specific rules, like "don't lie", that's when people start jumping ship.
Some people do also possess the intuition that lying is *inherently* wrong, and, well, that just leaves the rest of us scratching our heads. Like what does it even mean to be *inherently* wrong? In what way? Because it checks the wrong box? It seems to me like something like that could only be true if theism were correct and some omnipotent being did indeed reveal such a cosmic truth via revelation.
Funny thing, but even an omnipotent being doesn't solve the issue. Suppose that Gods reveals Themselves and tell us that doing X is objectively inherently wrong, unrelated to any our intuitions or utility calculations. What would that mean?
If God would punish us for doing X or reward us for not doing X that would affect our utility calculations and be meaningful. But otherwise why would we care what's Gods point of view regarding X?
I was imagining a world where omnipotence extends to the ability to objectively define concepts. But in a strict sense, I agree with you, and doubt such a world is even conceptually possible.
Yeah. I'm just really amused by how eager we are to assume that the existence of God would solve some philosophical problem but as soon as we think about it some more, it becomes clear that this actually would change nothing.
Maybe not 99%, but probably 95%. There are a lot of things in common between the vast majority of moral codes.
As far as I know, there is no Catholic consensus that consuming marijuana is a sin. I imagine the interpretation is similar as with alcohol, and that there are subtle differences between responsible, acceptable use, and sinful indulgence in which you allow some of your moral decision-making abilities to be weakened.
But I agree in general that Catholicism is a dogmatic religion and has things like the catechism, which define wrongdoing rather explicitly, and without reference to consequentialist reasoning.
I think a fair number of utilitarians would also caution against marijuana use, pornography, and other similarly indulgent activities in lots of circumstances. It is totally possible that seemingly harmless pleasures could have insidiously bad effects for utility when everything is taken into account.
But I suspect that most people only have the intuition that "lying is bad" because lying often leads to less utilitarian outcomes. If lying somehow consistently lead to positive consequences, would you still have that intuition? Same goes for "don't violate someone's natural rights". In most ordinary real-life applications of that rule, not violating someone's "rights" IS the utilitarian thing to do, and so utilitarians can still earnestly promote it, at least as a useful rule of thumb if not an absolute moral principle.
I'm curious if you have any ethical intuitions that can't be tied back to utility in this way. Is there any ethical precept you would advocate for, that in most ordinary real-life applications of it (as opposed to in thought experiments concocted by moral philosophers), tend to produce outcomes that run counter to, or at least orthogonal to, utilitarian concerns?
Sorry, but I completely fail to see how saying "I have this intuition" translates to an ought.
But fine. My intuition is that we "ought" to maximize utility, and that's my only moral intuition. The fact that it intuitively seems like I should act to maximize utility is a good reason to maximize utility. You could say this about anything. Every person has the intuition that wellbeing is something that matters, and if they claim otherwise, you can simply disregard them as crazy or a liar. No other ethical intuition is as universally shared. Not even close. "Lying is bad" is a qualified belief of mine only by virtue of the fact that I think it is often harmful. In cases where it seems likely to be net beneficial, it simply isn't my intuition that it's wrong. I strongly disagree that the blanket statement "lying is bad" is a pretty universal intuition **regardless of utility maximizing considerations**. If lying was something that was known to tend to result in good outcomes and happier people, wouldn't everyone's intuitions would be that lying is good? It's precisely those utility maximizing considerations that make it intuitively feel wrong to most people. Ethical intuitionism could very well be an effective strategy for maximizing utility, but that doesn't mean the intuitions themselves objectively point to any oughts. You can't prove an ought. Deal with it.
Most people absolutely rely on utilitarianism whether or not they consciously realize it. Whenever someone recognizes that one of their moral dictums leads to a repulsive conclusion in terms of wellbeing, they almost invariably call upon some other moral dictum which "intuitively" feels like it has priority in that instance. Funny how that happens. You do it too. From reading your comments on less meta topics, I do find you to be really insightful and thoughtful. You seem to reason like a good utilitarian, in my view. But then when you explain the reasoning behind your reasoning, you call upon all this extra fluff, and I'm completely baffled as to why.
And no one calls you a psychopath because I bet you don't explicitly tell people you disregard their wellbeing. Of course, I doubt you actually *do* disregard other people's wellbeing in the first place. I wasn't talking about you there. Just a hypothetical person who actually wants to dispute that the wellbeing of others matters, my point being that practically nobody will take that view.
Here on ACX. Don't think I've heard of DSL. And thanks to you as well! This is one of those topics that can get me worked up, so I hope I didn't come off too harshly.
Well how does one get ANY oughts?
Utilitarianism can't tell you what to maximize, it just tells you how to maximize.
Most/many utilitarians use simple utility functions around things like 'reduce suffering and maximize flourishing' or w/e, but that is an arbitrary decision made because those seem like good ideas to the type of people who are utilitarians.
Taboo "objective morality" and ask your question once again.
Is there some set of rules, condeming actions good or bad, unrelated to our intuition and utility calculations which is somehow supperior and which we should follow even against our own value system?
No.
Can our values, moral intuitions and proposed utility functions be presented as an approximation of some other utility function, which we would prefere to follow if we knew better and were the kind of people we wish we were?
Yes.
To the extent that morality is applied game theory, for which we have both evolved intuitions and cultural consensus, it makes sense to claim that it has an objective basis. Beyond that, any nuances that arise from idiosyncratic circumstances of a certain society or the human condition as a whole are essentially arbitrary.
He may not, but I do. Normatively speaking, anyway. I guess there might be a "true" "objective" morality in the sense of some sort of average across each individual's morality, and I suppose that your intuitions are probably accessing something like that (limited in some way to the individuals that make up your cultural heritage or whatever).
You can't know your premises to be true. You choose your 'oughts', you can't discover them from the outside world - there is no 'ought' in physics. From an utilitarian perspective - meaning having already chosen that we ought to maximize utility - follows that sometimes we should lie because lying maximizes utility (by preventing the axe-murderer from murdering your friend) and, uh, I don't know how you define natural rights but assumedly there are situations where they should be violated to maximize utility.
Let's say you decide to be a counter-Utilitarian - you want to maximize suffering instead.
I can't prove you wrong using logic or science. I *might* be able to say that you don't seem to act that way in practice, and for instance avoid suffering for yourself. But maybe you don't do that, and you actively seek out suffering for yourself and others. I could argue that this against the moral intuitions of the overwhelming number of people, but you're within your rights to shrug and say that they're wrong. I could argue that hardly anyone *wants* this, but if you don't care what people want, why would it matter? Ultimately, I may not be able to convince you.
Would you be wrong? There are two answers here. I would argue that you're *mistaken*, and that we should not in fact maximize suffering (but I can't really prove it to you). And I could also say it's extremely likely that if you acted in accordance with your counter-Utilititarianism, it would outcomes that I, every other Utilitarian, and the overwhelming number of people, find bad, and that such actions are immoral. You, of course, would not agree, and you would find Utilitarians as abhorrent as we find you in this thought experiment.
If you just want to maximize the amount of Truth-statements, or perhaps minimize the amount of False-statements, that wouldn't be nearly as abhorrent, but it seems like a super weird objective. The kinds of things you should do if this is what you want seem absurd. It's just one step beyond paperclip-maximization.
1. This is true. I don't. But I would say that this makes me a bad utilitarian, the same way some sinner could be a bad Christian.
2. I would disagree about this. The utilitarian solution to the trolley problem, for instance, disagrees with Kantian and Christian morality, but agrees with the moral intuition of the large majority of people. If you wanted to test moral systems against intuitions, I don't think you would find anything that beats utilitariansim. Medical ethics lean _strongly_ towards utilitarianism. You have to construct very complicated situations - the Fat Man might be one - before moral intuitions start to go strongly against utilitarianism.
3. True, and this is very interesting. Christian morality is motivating - you will burn in Hell if you don't do the right thing. But utilitarianism isn't - it's abstract and doesn't push itself on you. To quote Brave New World: “Happiness is a hard master – particularly other people’s happiness. A much harder master, if one isn’t conditioned to accept it unquestioningly, than truth”
4. Again, I disagree. It's highly unlikely that the Fat Man has ever *actually* happened in the history of mankind. Meanwhile, it's trivial to come up with real, actual examples where lying is the right thing to do.
Not wrong at all, because there is no "right" and "wrong". Those are concepts invented by the human mind. There's no stone tablet of rules in the universe about what is right and wrong. I am simply a utilitarian because, when peering into my soul and asking myself "what rules are in the stone tablet there?" i see
1. I am conscious
2. My suffering matters
3. Suffering is bad
From those, you can get to utilitarianism, assuming you also see those rules in your soul. But if you see different ones, well, then you'll see different "rights" and "wrongs". And no one can say, objectively, your right is right, because there is no objectivity here. Moral relativism (at least in the weakest sense), is true.
Actually, it might be more correct to call this moral nihilism instead of relativism. Moral nihilism is true. The weak form of moral relativism (everyone thinks they are the good guy in their story) is true. The STRONG form of moral relativism (everyone is the good guy in their story and thus we shouldn't judge them) is false.
Cultural and moral relativism would not typically apply, although there could be situations where a certain act results in a net positive in some society or time but not another. It's possible a medieval monk would suffer more from not being allowed some self-flagellation, for instance, even though you should stop your kids from doing it.
But this is just because different things can cause different amounts of happiness or suffering depending on context.
Oh no i agree, i meant relativist in the kind of universal sense. Like, if someone tortures a baby, all of humanity agrees that is bad. But there is no objective metric we can use to say that it's bad, just our "intuition". God isn't going to come down and say NO. BAD.
Morals are an invention of humanity. We kinda just made them up and pulled them from nothing. It's not like math, where 2+2=4 always and forever. Baby torture, to other people, or other species, isn't evil! https://www.greaterwrong.com/posts/n5TqCuizyJDfAPjkr/the-baby-eating-aliens-1-8
You very likely *should* sometimes lie, for instance when an axe-murderer is searching for the friend who is hiding in his house.
And it's an odd utilitarian who accepts inviolable natural rights beyond having whatever serves as utility. How would that even work?
*Your* house.
Because not lying in a particular situation would lead to bad consequences with regard to total net utility in my best judgment?
You mean, "why be a utilitarian?" My answer would be that happiness, unlike truth-telling, is *inherently* valuable. I can't use science or formal logic if you disagree, but it seems easy to set up a situation where telling the truth would go against moral intuitions, common sense, or being a decent person.