I have been doing a lot of reading about trying to create an AI that has the somatic responses of human beings, which I guess is the issue I'm hung up on. So I've been trying to educate myself on the state of the art. In light of this, I realize some of my comments are a little like a middle schooler sitting in on a PhD class and asking rather dopey questions. I'm a little embarrassed.
However, I am struck by two threads in these comments: discussion of AGI, and the questions about parenting and what might be the most effective way to raise one's child.
That’s kind of fascinating in itself to me.
I have raised a couple of kids, and some of the issues seem to really cross over.
Interesting question. Well, like today, we have fat and slim people. Fat people are still made fun of all the time, even though they shouldn't be. In a moral society, people would still be treated the same even though they would be able to tell who is richer or poorer depending on how they look. But what is likely to happen is that people will prefer to date/hang out with someone with the "rich"-looking trait rather than the poor-looking trait.
In choosing a spouse, surely being overweight or impoverished would (should?) make a difference. There may be benefits to holding less superficial values, but likewise for observing these common biases.
To what extent is choosing friends/acquaintances/business partners a matter of degree vs a fundamental difference?
What if some politically influential parts of society complain about racial inequity?
Usually they make the assumption that the distributions of intellectual ability or aggressiveness are the same for all the visibly identifiable subpopulations. And a logically consistent person who shares that assumption would arrive at the same conclusion.
Scott you are one of the few American Jewish public figures who has talked about the Jewish intellect without being apologetic.
It's understandable that many Jews shy away from this topic; after all, gentile geniuses tend to be excessively humble too.
Do you think there are differences on average between the various gentile ethnic groups?
If you think there are such differences do you think the general public should be educated about it to counter the racial inequity narrative?
Also, in a country that provides a lot of assistance to its poor citizens, do you think everyone should have the right to make as many little citizens as they want? Or maybe, depending on their genetics, that should be controlled?
I'm frustrated and sad that good things I liked might end or might at least be different because people I liked did dumb shit.
Current Affairs was my favourite magazine, but then NJR handled the recent staff thing really badly (he flip-flopped on how much authority he wanted, and it blew up during a hiring decision between two great candidates—the biggest waste is that the candidate he liked would have been the staff's fave too; he just didn't want the decision to be democratic anymore!). Now Lyta, Nate, and Adrian have all written open letters about it, and the mainstream and conservative media are talking about it, and not all publicity is good publicity, and who knows if the magazine'll even last.
Aubrey de Grey is the bearded longevity-research guy who founded the SENS Foundation. Now there are sexual harassment allegations against him; who knows how they'll turn out, but he also idiotically interfered with the investigation (emailing a mutual acquaintance of one of the accusers to persuade her to implicate someone else, instead of just talking to the investigator), so SENS obviously had to let him go. His idiotic decision is bad for himself, of course, but also bad for longevity research, which really needs to behave professionally to earn respect, shake its image as just, idk, Peter Thiel's rich-people-out-of-touch-with-normal-people's-struggles pipe dream, and show that it's a safe and thriving place for young scientists, including women, to work.
David Sabatini, another big-name longevity-research guy (an expert on rapamycin and mTOR), also just got let go from the Whitehead Institute over sexual harassment allegations, and in this case the reaction on Twitter makes it sound like it was a really open secret, so I'm guessing they're true, whatever they are (no details are public).
The pandemic has done disproportionate damage to worse-off people. What would pandemic response look like in a more egalitarian society? How would risky in-person work be handled?
What David Piepgrass said. As for risky in-person work, what couldn't be eliminated might be done by volunteers from the less vulnerable part of the population (young, healthy), being paid a significant risk premium - and supplied with decent PPE.
Of course this implies that old Joe with emphysema, the expert/foreman in the meat packing plant, would be able to get his income replaced while not working, and/or training for a new role. That would be a very hard sell in the US. And probably a hard sell to him too; few people want to become newbs again.
It's arguable that the reason stockpiles of PPE were inadequate at the start of this was the never-ending drive for higher profits and lower costs. I've seen recent stock market darlings cause major damage locally (hello, PG&E) by cutting corners in the past to enhance their bottom line. There's no reason a government should do the same, not being driven by profit - except that habits of thinking may not follow logic. Hence the expired and non-existent items in US and other government stockpiles.
So there might actually have been adequate PPE at the start of this, at least for those with the highest risk work (treating the sick).
If the concept of "egalitarian" could include the culture of the people themselves, we'd have a lot less Covid going around, since people would take vaccines to help their community reach herd immunity. I watched a guy named Yuri yesterday, a friend of Weinstein, debunking lots of claims by Weinstein. I thought he did a good job, but he made classic mistakes like decrying the anti-vax campaign as immoral because it's, you know, killing people. Well, judging by YouTube comments, most red tribe members couldn't hear anything he said after that.
An egalitarian society would say "Covid tests are free, and if you catch Covid we'll pay for you to quarantine in a hotel and immediately pay replacement wages for the work you missed. Also, we need you to think hard about who you came into contact with so they can be tested immediately."
> Oh come on. Where's the National Socialist German Worker's Party these days? What happened to Tojo? The Brits certainly fixed the Boer's wagon, home guerilla advantage notwithstanding. There are no Nationalists left in mainland China, and no followers of the late President Thieu bombing the occasional railroad bridge in Vietnam. It is certainly possible to win against a weaker opponent even if he's got the home-ground advantage, can melt into the countryside, is willing to live in caves and eat rats and don suicide bomb vests. It's entirely possible to wipe those people out, root and branch. But it requires focus and commitment, and quite often some pretty ugly decisions. You have to be damn sure that's what you want to do. Being half-assed about it, and not entirely certain what you're trying to achieve definitely doesn't work, and never has.
I don’t think that all of your cited examples are at all applicable.
Nazi Germany, certainly not; China, certainly not.
The one exception is Vietnam, and we all know how that turned out.
Am I the only one that gets this issue, where I get an email that someone has replied to me and I hit reply and for a brief moment my comment is highlighted on my screen and then it defaults to some random place in the order of comments?
This is a pretty good indication that there is something about mail in iOS and the comment section here that doesn’t play nice together. I will try it on my computer and see if it persists there.
Sometimes people's blood pressure drops after eating. For some of those people, it drops into dangerous territory, but I wonder whether eating frequently would help some people lower high blood pressure.
I work as a contractor for the US Federal Government, and their rollout of the vaccine mandate has been troubled to say the least. A brief timeline:
2 weeks ago: They are developing a plan to implement Biden's executive order requiring vaccines or recent COVID testing to enter Federal facilities. More information will be released soon.
Yesterday: Starting two days from now, everyone must either sign a sworn affidavit saying they are fully vaccinated or provide a negative COVID test taken within the last 3 days. They aren't sharing vital details like who will pay for the tests, how security officers will evaluate COVID testing results, how they will verify who has signed the affidavit, and how this won't cause massive delays as hundreds of employees enter the building at the start of the shift. You still need a test even if you have received a vaccine if it has been less than two weeks since your last dose, providing an incentive to get J&J rather than Pfizer/Moderna. Most of this doesn't matter though, as most of the unvaccinated will just lie on the form knowing there probably won't be any negative consequences for doing so.
Today: Implementation of the mandate has been delayed. More information will be released soon.
Conclusion: Parachuting for charity costs more money than it raises, carries a high risk of serious personal injury and places a significant burden on health resources.
"The Deeper Crisis Behind the Afghan Rout: Observers abroad see the culmination of decades of American incompetence." By Walter Russel Mead | August 23, 2021
This isn’t a conventional credibility crisis of the kind President Obama faced when he backed down from his Syrian red line. America has demonstrated its commitment to Afghanistan for 20 years and had no treaty obligation to defend the former Afghan government. A competently executed withdrawal could have enhanced American credibility among some Pacific allies, especially if it was accompanied by clear steps to build up U.S. forces in East Asia.
The Afghan debacle doesn’t create a crisis of belief in American military credibility. Informed global observers don’t doubt our willingness to strike back if attacked. The debacle feeds something much more serious and harder to fix: the belief that the U.S. cannot develop—and stick to—policies that work.
Neither allies nor adversaries expected perfection in Afghanistan. Mr. Biden was right to say that the end of a war is inevitably going to involve a certain amount of chaos, and world leaders likely didn’t anticipate a seamless transition. They did, however, expect that after two decades of intimate cooperation with Afghan political and military forces, the U.S. wouldn’t be blindsided by a national collapse. They didn’t think Washington would stumble into a massive and messy evacuation crisis without a shadow of a plan. They didn’t expect the Biden team to have to beg the Taliban to help get Americans out.
It all fuels fears that the U.S. is incapable of persistent, competent policy making in ways that will be hard to reverse. It seems increasingly evident that despite, or perhaps because of, all the credentialed bureaucrats and elaborate planning processes in the Washington policy machine, the U.S. government isn’t good at producing foreign policy. “Dumkirk,” as the New York Post called the withdrawal, follows 20 years of incoherent Afghanistan policy making. Neither the past two decades nor the past two weeks demonstrate American wisdom or the efficacy of the byzantine bureaucratic ballet out of which U.S. policy emerges.
I think everyone realises it was always going to be a mess, but the question is, was it more of a mess than it needed to be? Right now, while it's not great, the Taliban seem to be keeping their heads down, relatively speaking (yes, they're executing police chiefs and hunting down journalists, but they're not on a rampage as yet), so it's bad but not rivers-of-blood bad yet.
And I think that "yet" is what we're all waiting to see: once the Western forces and Western civilians are gone, and it's just the Taliban, what are they going to do?
I can't really blame the army for collapsing as it did; it seems (and again, I'm only going by what the news is telling me, and who knows how true that is?) that any of the officers who had personal power-bases before the US came along to prop up a puppet regime, or who are staunchly anti-Taliban, have headed north or back to their own tribal bases to start a resistance, rather than staying and fighting with the army in the expected, conventional sense - and that's probably part of the problem right there: they trust their own tribal alliances or their own picked men, rather than having any confidence in the troops under them and the army as a coherent national force.
For the rest of the soldiers, it's "stand and fight, but when the government is handed over to the Taliban - and they're already in discussions about handing it over - then anybody who fought the Taliban is for the chop" so of course they got out as fast as they could. Wearing the uniform is painting a target on your back. Why stay and fight for a regime that is already working on surrendering?
It's interesting that Hamid Karzai is still a power broker - so, did the Americans back the wrong horse with Ashraf Ghani, or was it a case of "at the time he suited our purposes"?
See Carl Pham's comment below. You can decide to leave the party, and take your glass and your plate into the kitchen, and find the hostess and thank her for the lovely evening. Or you can sneak out without cleaning up or saying anything to anyone.
I'm trying to find background on Ghani and why he fled while Karzai is apparently secure enough to not only stay but be involved in talks, and reading the Wikipedia article on him is just one 🤦♀️ after another:
"He is also the co-founder of the Institute for State Effectiveness, an American organization set up in 2005 to improve the ability of states to serve their citizens.
In 2005 ...Ghani gave a TED talk in which he discussed how to rebuild a broken state such as Afghanistan
Ghani ran in the 2014 presidential election securing less votes than rival Abdullah Abdullah in the first round, but winning a majority in the second round. Following political chaos, the United States intervened to form a unity government."
I swear, the main difference I see between Ghani - who was conciliatory, in speech at least, towards the Taliban, yet had to flee - and Karzai - who actively fought against them, yet can remain in the country - is their relations with Pakistan: Ghani was cool to frigid with them, Karzai was on good terms.
Which does sound more like Pakistan pulling the strings of the Taliban. Which is its own entire problem, because this is more smoke and mirrors - whatever you do or don't do when you're negotiating with the Taliban may be an entire waste if Pakistan is in the background putting its thumb on the scales.
I don't usually support Biden, but in pulling out of Afghanistan, he showed political courage and far sightedness. The fall of Kabul is often compared to the fall of Saigon, but the lesson of Saigon isn't that the US should have stayed bogged down in Vietnam for another 20 years. It's that the US should have pulled out much sooner. As Biden said:
"We spent over a trillion dollars. We trained and equipped an Afghan military force of some 300,000 strong — incredibly well equipped — a force larger in size than the militaries of many of our NATO allies. We gave them every tool they could need. We paid their salaries, provided for the maintenance of their air force — something the Taliban doesn’t have. Taliban does not have an air force. We provided close air support. We gave them every chance to determine their own future. What we could not provide them was the will to fight for that future."
Despite the overwhelming financial and military advantages the US gave the Afghan government, the Taliban overran the entire country in just weeks, taking half the country in just a few days. Why? Because 99% of the population supports Sharia. 92% of Afghan *women* support wife beating (https://www.prb.org/resources/most-women-in-afghanistan-justify-domestic-violence/). 79% of Afghans think you should be killed for leaving Islam. Why would Afghans support an immoral, corrupt, incompetent, foreign-backed infidel puppet regime with values diametrically opposite to their own? One that can't even deliver internal peace, the most basic requirement of any state? One propped up by warlords with child sex slaves (https://www.nytimes.com/2015/09/21/world/asia/us-soldiers-told-to-ignore-afghan-allies-abuse-of-boys.html)? The Afghan puppet regime had no legitimacy. The Taliban does, because the people support its brutal theocratic values.
The Afghan people got the government they wanted. If the Taliban ends the 50 year long civil war, puts an end to the child sex slavery, and provides even the most basic government services, they will have done more for Afghanistan than any government since at least the Soviet invasion of 1979.
Kind of a straw man there. The criticism by Sobchak, which mirrors the general criticism, is not the "what" (concluding the mission in Afghanistan) but the "how." President Biden asserts, laughably, that there was *no* better way to leave than the way we did, or else he moves the goalposts to suggest that those who say he royally screwed the pooch on the "how" are arguing about the "what." They're not, and it's dishonest to say they are.
So what better way was there to leave? The WSJ article puts forth no alternative proposal in the non-paywalled portion, even though it's 5 paragraphs long. The moment the US started withdrawing, the Taliban was going to go on the offensive. Judging by recent events, they would have quickly captured the country. It wouldn't have mattered if the US left in 2021, 2030, or 2085. There are people who say Biden pulled out too quickly, but they rarely say how long he should have stayed, if 20 years wasn't slow enough.
Biden did the right thing. A lesser president would have balked at the prospect of the Taliban capturing Kabul on his watch and kicked the can down the road, as the previous 4 presidents did, causing more death and destruction on all sides. Biden ripped the band-aid off quickly and took the political hit for the benefit of the United States--and ultimately, for the benefit of Afghanistan.
Well, at the very minimum, Biden could've ordered all of our equipment and materiel to be destroyed, rather than just allow the Taliban to take it. But that's a pretty low bar; in general, a more gradual withdrawal, covered by continuous airstrikes, would've been more effective (though admittedly more expensive).
Yeah no. I'm pretty sure CENTCOM could have come up with a plan that was loads better than what actually happened. Give me command of 30,000 Marines and associated equipment and even I could do it better.
What plan would that be? What experience do you have commanding large armies? Which wars have you fought and won? Just asserting that you can do better than the professionals of the world's most powerful army isn't very convincing.
I have no experience at all. Which is why I said "even I could do it better." I'm 100% sure the "professionals of the world's most powerful army" could've done it a million times better -- had the President allowed them to do so. But as many stories in the media will tell you, he did not, so all their planning chops were in the event moot.
I think every person working for Central Command, from General McKenzie on down, could have individually come up with a better plan for withdrawal from Afghanistan; but the organization itself might not.
It's structural: the service chiefs buy airplanes and ships, train personnel, and do all the work of building a military.
(Another time, we can talk about how this is an incentive for the service chiefs to waste money on useless crap like LCS.)
Then the combatant commanders compete with each other to see who can get the biggest slice of that pie. Aren't they supposed to be competing with China and Al-Qaeda and drug dealers, instead of each other?
I know the seriousness of the charge I'm making (and I really want to emphasize this is a *Meditations on Moloch* style bad system, not a matter of any individual being dishonorable).
But the logic is inescapable. If Central Command's priorities had been twisted by bad incentives to maintain or increase its justification for manpower and materiel that Indo-Pacific Command also wants, that is an incentive to spend less effort building an Afghan army that can stand on its own and more effort giving them support only we can provide, like close air support or air-dropped logistical delivery.
Most importantly, it's an institutional motivation for CENTCOM, the United States military command responsible for Afghanistan, to always be putting off asking the Afghan army to defeat the Taliban; to always find a reason to stay; to never quite be willing to prepare to leave.
And I know that's a shitty motivation and doesn't make Central Command look great. But Central Command has been overseeing all US military activity in the Middle East for the entire time we've been in Afghanistan. If I'm mistaken about the combatant commander having an incentive to prolong the conflict, that means they were trying to defeat the Taliban the whole time... and failed.
What cities in the US would approximately meet these criteria?
Deal makers/breakers:
- Must have good weather (= sunshine) almost round the year (e.g., Miami).
- No snow.
- Must not be in H(awaii)ST (Also, e.g., Miami)
- Must be located in an urban or urban-adjacent location (e.g., San Antonio, TX)
- Must not be poised to get a lot worse climate-wise over the next 10-20 years (excessive heat, more storms, more rains, or floods).
- Must not sit on a geological fault line that could go boom someday "soon" ;-) (e.g., SFO)
- The local politics must not be the "bubble" kind. That is, it can be right-leaning or left-leaning - but (a) it should not be extreme in either direction and (b) it shouldn't be filled with insular people who have no idea how to get along with 'the others'.
- Must have fiber optic internet generally and easily available.
Strong preferences:
- Should not be in EST (e.g., Miami and NYC are both too far from the tech coast, time-zone wise. Might work though).
- Should be near a major or a minor tech center (there used to be only 2.5 tech-centers in the US namely SFO, SEA, NYC, but there are more now). If any of FAANG has a local development office, that's a very good sign.
- Should be cheaper than SFO, SEA, NYC :-)
- Should have direct flights to SFO and SEA (this is just a corollary to 'should be near a minor tech center')
- Should have highbrow culture (theatre, music etc.).
- Should have Amazon Prime easily available in most places ;-)
- Should be low crime, culturally diverse, and expected to sustain these characteristics over the long term (20 year outlook).
Have you thought about dangerous flora and fauna? I grew up outside of the range of most poisonous snakes, insects carrying dangerous diseases, and insects dangerous in themselves. The prevalence of hazards of this kind seems to be increasing over most of the continental US, partly due to non-native pests spreading, and partly due to warmer winters. And unlike larger dangerous fauna (bears etc.) the smaller dangerous pests often do quite well in (sub)urban areas - which is where you'll be if you want highbrow culture and a good internet connection.
So what's wrong with snow? Says the guy in Saint Paul. :)
I know. We are all so very different. I've aged out of winter camping in the BWCA but watching the northern lights after the sky clears with zero light pollution is pretty damn fun.
My body does run pretty hot though. The high summer humidity in the Twin Cities is a much greater burden than winter cold and snow for my particular metabolism.
There is a low-lying area near my old home town where a state record of -62F was set in the late 1990's. At the time I was working with an engineer from Novosibirsk. He had no idea that North America got that cold. Blew him away.
I've spent time outdoors in temperatures down to -50 F in northern Minnesota.
<Not Irony> The experience was pure invigorating fun. </Not Irony>
At any rate, I wish you good luck finding your own climatic and cultural sweet spot.
Life is a bit more complicated (and correspondingly a lot more fun) when factoring in a significant other ;-) If I were single, this would be easier. If my spouse had no roots in the US and were an immigrant like me, perhaps this would be easier as well. If she didn't have strong opinions or a career to think about, this would be easier yet. If we didn't have to optimize for future needs like older in-laws potentially moving nearby, things might be simpler. And so on...
I've spent plenty of time in the cold, from childhood trips to Badrinath in the Himalayas (https://en.wikipedia.org/wiki/Badrinath - I don't recall the oxygen levels being a problem, I was running around and didn't skip a beat, no idea why everybody thinks 10k ft is a big deal) to walking around downtown Montreal on New Year's Eve in -30C ;-) I've also spent plenty of time in equatorial places in 105F heat. I've become soft and comfortable now, so that's what I'm looking for next :D
I’d love to see Badrinath myself sometime but my SO - wife of 39 years - has some strong opinions about what she is willing to eat. She needs to be within striking distance of a Whole Foods or Trader Joe’s to keep her tummy happy. Vacationing in India is a pretty hard no for her. Oh well, she has made me a very happy man for a long time so I wouldn’t dare complain. :)
Miami's weather in the summer (which lasts 6+ months) is pretty bad. The heat is oppressive and there are daily (brief) thunderstorms. It's a matter of taste, though; plenty of people do prefer it to colder climates.
Raleigh is interesting. It's a great idea and seems like an emerging tech corridor. I have to research its potential a bit though in terms of how it will fare over time. I think Atlanta has been getting a lot of recent tech investments/hiring in the south and I wonder if Raleigh is going to keep on being a university-city sized tech-town...
Raleigh will definitely continue to grow. Look up the AI/ML campus that Apple is building. UNC comp sci is great, as is Duke's department, and it's a great place to live (driveable to the NE and the South, good airport, cheap COL, good housing, minimal culture war, close to the Outer Banks and the NC mountains, amazing in-state schools and tuition deals, good weather), so high-human-capital people will stay if there are jobs (and they already are). Google also has teams in Raleigh/CH because of all the comp sci profs. I would take the pair trade of Raleigh vs Atlanta over the next two decades in a heartbeat. If I were going to start a co in a non-target city I would prob do it in Raleigh.
"Must not be poised to get a lot worse climate-wise over the next 10-20 years (excessive heat, more storms, more rains, or floods)."
Alarmist rhetoric greatly exaggerates the speed of climate change. Global warming so far, since 1913, has been less than one and a half degrees C. It may have speeded up more recently, but in 10-20 years it is unlikely to change the temperature of any city by enough to be visible through the noise of random variation. That fact is obscured by the tendency of the media to blame any unpleasant weather on climate change. Similarly, the high end of the IPCC sea level rise projection for the end of the century, as of the 5th report, was about half the difference between high tide and low tide — there are not many places where that makes much of a difference, and your concern is with a much shorter time period.
As annoyed as I am by climate science deniers, I do think the media has often exaggerated the threat. For instance, the common ECS estimate of doubling CO2 causing 3°C warming is something that is supposed to happen over hundreds of years, and the media often leaves out the "hundreds of years" part while failing to report the shorter-term warming expectation (TCR) which is more in the neighborhood of 1.75°C. I am more concerned with the effect of warming in more southern locations that didn't cause the warming - the Philippines, Central America, etc.
I would also add that I perceive the climate where I live - very close to David Friedman - to have gotten somewhat worse in the 24 years I've lived here. I use air conditioning more. I experience more smoke days. (None noticeable in my first decade here.) I'm not sure whether drought being more common is climate change or coincidence - I moved here during an unusually wet year, which somewhat affected my expectations. But watering restrictions are now a constant presence.
It's also changed (for good or ill), such that plants that used to be grown commercially here no longer set fruit reliably enough to be worth growing locally. (Of course growers would probably have moved anyway, since (sub)urban growth means the land is currently more valuable with buildings than with orchards.)
What we haven't seen locally is catastrophic levels of change, though some of those burnt out of semi-local areas may disagree. People are not dying of heatstroke in extended periods of bad weather, except for the usual desert hiking mishaps (not all that local to me). People aren't being flooded out of their homes. We aren't having once in a century major storms. We're still only seeing those things as news coverage.
The only thing I know of that A) was grown locally and B) has some trouble growing now is lady apples, but they were (and presumably still are - you see them in the grocery stores) grown in orchards in the hills, and the only place I've seen them have trouble growing is our yard (great harvest this year though - and most years, I'd guess our tree only fails to set fruit one year in three or so) which is... not in the hills. It's a significant climate difference!
There's definitely less good fresh fruit around than when I was a kid, but as far as I can tell it's mostly a "we need the land for houses" problem (so the farms get pushed farther out), not a "the tree won't grow/bear fruit here" problem, with an added-in "the Cosintino brothers retired, and none of their kids wanted the business, so the best produce shop closed despite being profitable" problem. (Why none of the local chains expanded to fill the niche I can't answer - my best guess is that sourcing good produce is hard.)
Are you thinking of a Central Valley problem, maybe? Because Silicon Valley was Prune Valley, and all kinds of plums and apricots still grow here really well - cherries too. It's just that's not the land's main value now. But I don't know the Central Valley as well.
I was thinking of cherries, and my understanding there was that they'd gone unreliable, though of course there was the double whammy of people wanting the land for houses, offices, etc.
We get a reliable crop every year from every cherry tree except one of the two Dad planted without knowing if they would bear in our zone (answer: one yes, one no). We'll have a reliable crop from that one too when Dad finishes mastering grafting. I don't know that we're using the commercial varieties, mind - we have less variety in cherries than apricots or peaches - but my understanding is our sweet cherry was dead standard when we put it in, which was probably between twenty and twenty-five years ago - it was one of Dad's first. It's had various health problems - it's getting to be an older tree and we're a lot better at planting than at troubleshooting problems, especially fifteen feet above our heads - but fruit set has never been one of them.
It's possible someone was planting marginal varieties (but why? A variety that's seriously marginal now was mildly marginal twenty years ago, nobody but crazy amateurs* plants trees that won't bear even one year in ten), or that it's a problem in Gilroy, or somewhere else hotter than here - my Gilroy friends who have fruit trees do fine *but* the only cherry I know was bought recently so is presumably a modern variety - I don't know any 20-year-old cherry trees anywhere but here. But um, at least one point of evidence against.
(... and if you ever want to plant your own cherry tree, I promise you should be able to get a good one that will still bear perfectly. No such promises on apples, though; our successes are all weird so no good data on exactly how warm they go.)
Water restrictions are more common for a simple and obvious reason: California's water infrastructure was designed in the 50s, mostly built between 1960 and 1975, and hasn't had any major addition at all since 1997. Meanwhile, the state's population grew from 16 million in 1960 to 30 million in 1990 to 40 million today. Simple math tells you the rest.
The *worldwide average* warming has been 1.5 °C, but there's much more warming over land than over water, and at medium and high latitudes than near the equator, so the amount of warming on temperate-zone land has been more like 5 °C, which is nothing to scoff at (it means that 35 °C days are as common as 30 °C days used to be, and 30 °C days as common as 25 °C days used to be -- actually even more than that, as the variance has increased as well as the average).
Er, the worldwide average warming has been more like 1.1°C with 1.6°C over land.
Generally those figures look much too high to me, but I would say scientists are bad at marketing; otherwise we'd be talking about the Paris target as 5°F of land warming instead of the equivalent 2°C of global warming.
I will continue to work on learning more about this, although I'd been under the (perhaps mistaken) impression that I was both well read and clearheaded about the topic of climate change.
Perhaps my time horizon should be longer than 20 years, and I should think about a 40–60-year time-span.
Sea level is not my only concern, although it's an important consideration. Things like drought/water table (and relative population vs water consumption - think the problems in CA), abundant electricity availability (think TX and the problems specific to its grid), etc. are also of concern to me. Some of these can be overcome (TX, for example - one can just install a generator) but others are harder to overcome (like drought).
If you're looking at shorter time horizons, drought is still something to worry about. The Colorado River has been overdrawn for the past century, and the latest droughts (which may or may not be exacerbated by climate change) have not helped that situation.
On the other hand, I'm uncertain if that affects your average city-person (it seems like a much bigger problem for farmers). Water regulators might start metering and charging for excessive use, but I don't see Americans in major cities dying of thirst.
Buying a generator may keep your AC on, but a power outage means you're not getting internet. I'm assuming your career is computer-related (since you specified fiber optic internet as a deal breaker), so maybe put weight on avoiding counties/states that can't keep the lights on.
Thanks - I'm open to it. For example, sporadic snow may be something I'm willing to handle. I live in the Pacific NW now and have lived in British Columbia near the US/WA border in the past, so I can easily work with some snow each year.
For example, I've been toying with Austin, TX and nearby places as a potential option. It meets most of the expectations, except it gets some snow, and power is a bit wonky but can be worked around if you're willing to own a home generator. Climate is the big question mark in Austin and makes me hesitate.
As DLR has suggested, Tucson is a nice idea, and Melvin's suggestion of San Diego also seems like a good choice modulo wildfire smoke problems (which can be worked around a bit with full A/C in a house).
I used to think that Portland, OR could be a nice option, until Portlandia stopped being a parody :-( I used to really enjoy driving down to Portland, Newport, etc. to see friends, and it was such a nice place 10-12 years ago. Sigh.
In terms of heat, drought, fire and cost of living, Oregon is well on its way to becoming Northern Northern California. To me that's much worse than it being a political/cultural bubble.
Santa Fe probably gets too much snow -- a half dozen or so snow events, sometimes of a foot or so. I'm from Tucson and like it well enough, though it's pretty hot, bigger than I'd like, and water is an issue. The mountains are close enough for relatively easy summer day trips, which is nice.
Anything north of LA in CA is likely to be negatively affected by wildfires in the coming years: dense smoke, power blackouts, freeway closures, and, of course, the actual fire (depending on location). These negative effects will be sporadic, however.
Thanks Melvin & Bugmaster. I visited San Diego years ago and enjoyed it, and my wife was on a conf trip and also liked it well enough. It has great Italian food which is a huge plus ;-) for us - we'll have to think about it some. We're in PNW so already familiar with fire+smoke problems - that's definitely a downside. Also worried about drought + water availability problems long term. I grew up in Chennai/Madras and never want to deal with water availability problems in my life again if I can avoid it (https://en.wikipedia.org/wiki/2019_Chennai_water_crisis was the culmination).
ACX has covered progeria and aging before, so perhaps some will be interested in a new paper that I helped co-author investigating the underlying cause of the disease. Hope you enjoy!
I'm in the UK and, like most, received the Oxford-AstraZeneca vaccine. It seems pretty clear that the UK government has given up any pretense of trying to control the disease, despite the country being solidly in the middle of a third wave of infections. I'm fortunate to have a job that can be done from home, and to have had family and friends to live with (so I've not died of loneliness), but until the pandemic my life completely revolved around the swing dance scene: I danced and taught classes several nights a week, and spent most of my annual leave travelling to dance events around the world.
Obviously everything was cancelled, and for a long time, but now dance events are beginning to appear in my calendar again and I'm trying to do the risk/benefit analysis of doing what I love vs. staying safe.
On the one hand, when I put some (pretty conservative) numbers into microcovid.org, it's clear that partner dancing—even on an outdoor, fairly sparsely-populated dance floor—is a REALLY bad idea right now, even at a 10%-per-year "risk mitigation" budget. On top of this, I know quite a number of friends (including one of my teaching partners) who were unlucky enough to catch COVID back in ~March 2020 (despite precautions) and have still not fully recovered from it.
On the other hand, catching colds is an occupational hazard for partner dancers and a risk I have always been happy to accept—and so far everyone I know who's caught COVID post-vaccination has either been symptomless or has had a minor illness for a few days and then been fine.
It seems plausible that if a vaccine provides a good level of protection against serious illness then it might also greatly reduce the chance of suffering long-term effects. What does the science say?
Many event organisers have requested that participants take lateral flow tests before coming, but few have enforced it—especially for outdoor events that are open to the general public.
You must have some pretty different assumptions about what an outdoor dance looks like. My guesses lead to it reporting an eye-watering 4% chance of catching COVID in a single 2h event. This seems obviously wrong, since if the risk were that high I would certainly have heard reports of new dance-acquired infections from recent events here—but it is not immediately obvious where the computation has gone wrong.
I assumed fewer people close to you, mRNA vaccines (which have a higher multiplier), and a silent/normal level of talking (do you talk to people loudly while dancing with them?). LFT requests probably give a decent multiplier too, depending on how many people you think comply and to what extent noncompliance is concentrated in risky people.
> Do you talk to people loudly while dancing with them?
It's not unusual—especially if you haven't seen them in 18 months!
> LFT requests probably give a decent multiplier too, depending on how many people you think comply and to what extent noncompliance is concentrated in risky people.
One thing that makes me and some of my friends quite nervous is that, at the moment, the most cautious people are still staying home, while the people who are going out dancing are (by definition) less cautious—and therefore potentially at above-average risk of having caught COVID. A remarkable demonstration of this phenomenon was provided by one local dancer who had given a sample for a PCR test on a Saturday morning in preparation for international travel, danced outdoors on Saturday afternoon and indoors on Sunday evening, and then learned on Monday morning that they had tested positive. Fortunately they 'did the right thing' by posting about it conspicuously on one of our local FB groups, so that people they'd danced with could get themselves tested, for which they were quite rightly praised—but when it subsequently came to light that this person had elected not to be vaccinated, reactions were decidedly more mixed: most were astonished and appalled, a stalwart few applauded, and the Bayesian reasoners figured, "well, that figures!"
This scenario has all 20 people listed as less than a foot away from you, which is very implausible. More realistically you should enter this as two scenarios and add the risks: one person a foot away, and 19 people 6+ or 10+ feet away.
The base scenario (2h on the dance floor with average person 3m away) then works out to 1000µCov, and it's another 700µCov for 8 minutes (~2 songs) dancing with one person. If I dance with 10 people (about 2/3rds of the 2h event) that gives a grand total of ~8mCov, which seems plausible but still a bit risky.
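Spelled out, the arithmetic is just the following (a minimal sketch; the µCoV figures are the outputs of my rough microcovid.org inputs, so treat them as assumptions rather than established numbers):

```python
# Rough arithmetic for the outdoor-dance scenario above. The per-activity
# microCOVID (µCoV) figures are the ones quoted in this thread and depend
# entirely on the assumptions fed into microcovid.org.
ambient_ucov = 1000      # 2h on the floor, average person ~3m away
per_partner_ucov = 700   # ~8 minutes (about 2 songs) dancing with one person
partners = 10            # dancing with 10 people over the 2h event

total_ucov = ambient_ucov + partners * per_partner_ucov
print(f"{total_ucov} µCoV ≈ {total_ucov / 1000:.0f} mCoV "
      f"≈ {total_ucov / 1e6:.2%} chance of infection from this one event")
# -> 8000 µCoV ≈ 8 mCoV ≈ 0.80% under these assumptions
```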
I know at least two partner dancers who'd had 2x mRNA vaccines (likely sometime in the Apr/May '21 timeframe) who got breakthrough infections. As best I can tell from social media, neither suffered from long COVID. One of them was in their early 40's and very healthy/active; the other I'm not sure what age group - likely late 30's to mid 40's.
(c) is a bit of a weak link, but I've read a bunch of other anecdotal reports that the attack rates might be similar for both vaccinated and unvaccinated cohorts, so I'd love to see evidence to the contrary - that would make me feel better (assuming the evidence shows lower attack rates for vaccinated cohorts).
And finally I've seen some anecdotal evidence suggesting that long COVID happens irrespective of vaccination status. I've also read arguments suggesting that when someone is vaccinated, their high neutralizing antibodies will keep infection levels low and thus COVID symptoms would be minimal, and also means long COVID unlikely. The premise here is that COVID symptoms and long COVID severity are correlated. I haven't found good studies that substantiate this intuitively appealing and logical claim.
What mechanism exists within the medical billing establishment to prevent hospitals, providers, or drug/device companies from generating arbitrary billing? E.g. anesthesiologists seem to always be sending us bills long after the fact, either from our children's c-sections or my umbilical hernia surgery, demanding payment. In the case of the c-sections I am certain the one claiming payment wasn't there. Likewise Apria just seems to periodically sprout new bills for a cpap machine long paid off every time I change insurance companies.
About as often as not we argue with the billing entity and they say "Whoops, I guess we screwed up", but the other times they threaten to send the bill to collections. That of course has rather negative consequences down the road, even if you win in a small claims case. Anyway, getting off track, sorry.
So it seems that there is very little stopping a medical provider from just making up bills to send to people, either by mistake or malice, and very little one can do to fight those bills, despite there being no evidence that the bills are justified. Am I missing something here?
I've downloaded the entire Covid dataset from [ourworldindata](https://github.com/owid/covid-19-data/tree/master/public/data), cut out all countries with <10000 overall cases, bucketed the remaining ones into three groups based on median age, and then made XY scatterplots, one data point per country, with X-value = total % fully vaccinated and Y-value = [avg new cases in the surrounding 5-day window]. (Both at a specific date only; I've tried 2021/06/01 and 2021/07/01.) The correlation is positive in all cases, i.e., countries with a higher % of people fully vaccinated have more cases. It remains positive if you add a 3-week delay to cases.
What's going on here? Anyone got a simple explanation? (Also, would you have expected this outcome?)
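For anyone who wants to poke at it, here's a minimal sketch of the analysis as I described it (pandas; the column names follow the public OWID CSV, and the case cutoff, age bucketing, and date are my stated assumptions):

```python
import pandas as pd

# Minimal sketch of the analysis described above, using the OWID dataset.
url = ("https://raw.githubusercontent.com/owid/covid-19-data/"
       "master/public/data/owid-covid-data.csv")
df = pd.read_csv(url, parse_dates=["date"])

# Drop OWID aggregate rows (World, continents) and countries with <10,000 total cases.
df = df[~df["iso_code"].str.startswith("OWID_", na=False)]
totals = df.groupby("location")["total_cases"].max()
df = df[df["location"].isin(totals[totals >= 10_000].index)]

date = pd.Timestamp("2021-07-01")
snap = df[df["date"] == date].dropna(subset=["median_age"]).copy()

# Y: new cases per million, averaged over the surrounding 5-day window.
window = df[df["date"].between(date - pd.Timedelta(days=2), date + pd.Timedelta(days=2))]
snap["avg_new_cases"] = snap["location"].map(
    window.groupby("location")["new_cases_per_million"].mean()
)

# Bucket countries into three groups by median age, then check the correlation
# between % fully vaccinated (X) and average new cases (Y) within each bucket.
snap["age_bucket"] = pd.qcut(snap["median_age"], 3, labels=["young", "middle", "old"])
for bucket, grp in snap.groupby("age_bucket"):
    r = grp["people_fully_vaccinated_per_hundred"].corr(grp["avg_new_cases"])
    print(bucket, round(r, 2))
```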
While I'd like to see clearer evidence for that claim, the explanation that comes to mind (if it's true) is that, time after time, whenever there are signs that the situation is getting better, governments tend to immediately make things worse by loosening restrictions. In my home base of Alberta, for instance, they apparently saw that 50% of the population was vaccinated and decided that it was time to end the mask mandate (Delta? What's Delta?). As vaccination rates rose, they doubled down with new rules:
> close contacts will no longer be notified of exposure by contact tracers nor will they be legally required to isolate — although it still recommended.
> The province will also end asymptomatic testing.
The conservative government backpedaled somewhat when Covid cases predictably shot up in response.
In addition to "the same countries can test and can vaccinate" point that other people are making, there's also the fact that some countries choose to double-down on the non-pharmaceutical interventions route (e.g. Australia notoriously has a slow vaccine rollout but a strong lockdown, I think Japan and South Korea might be in the same bucket?).
I think you would also need to plot how much testing each of those countries does in order to get any kind of useful data. The countries with the most vaccines are also the countries doing the most testing. Lots of people are tested in the US not because they have symptoms but because their job or school requires it on a weekly or even daily basis.
Also, countries that got the vaccine widely distributed also felt safe removing all the social distancing and mask-wearing restrictions (such as in the US), which essentially just let the virus rip among the many remaining unvaccinated.
Sure. Ask yourself what fraction of COVID cases are actually reported and available on the Internet in the United States versus, say, Iran, Mexico, or Burma. That will tell you why the countries with the highest vaccination rates are also reporting some of the largest number of cases.
What is that supposed to be a chart of? Canada is X=6 Y=23 which means what? Whatever it is, outliers like UK might be having an outsized effect. I've been wondering why the UK has had so much Covid, any ideas?
It's X=percent fully vaccinated, Y=new daily cases per million, averaged over a 5 day period and with a 3 week lag. The lag was probably unnecessary as I said in the other comment (and UK is no longer a big outlier if you remove it).
Actually it kind of does when you remove the delay, which probably does not make sense given that the x-axis tracks fully vaccinated people. Nonetheless, the effect is not nearly as clear-cut as I was expecting before I made these.
I've also created graphs for new deaths rather than cases; they all have somewhat weaker correlations, but not by as much as you would predict. (Rich countries only, with deaths, gives a negative correlation, but not a particularly strong one.)
A delightful example of Simpson’s paradox, with x = vaccination status, y = measured cases, and the problem being that measured cases goes up as measurement capacity and vaccination status go up with development. (If that’s why)
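To make that concrete, here's a toy illustration with made-up numbers showing how the within-group and pooled correlations can point in opposite directions:

```python
# Made-up numbers purely to illustrate the aggregation effect described above:
# within each group of countries, more vaccination goes with fewer *measured*
# cases, but richer countries both vaccinate more and test (hence measure) more,
# so pooling everything yields a positive correlation.
import numpy as np

# (vaccination %, measured daily cases per million)
poor_countries = [(5, 20), (10, 15), (15, 10)]       # little testing -> few measured cases
rich_countries = [(50, 200), (60, 150), (70, 100)]   # lots of testing -> many measured cases

def corr(pairs):
    x, y = zip(*pairs)
    return np.corrcoef(x, y)[0, 1]

print("within poor:", corr(poor_countries))                   # negative
print("within rich:", corr(rich_countries))                   # negative
print("pooled:     ", corr(poor_countries + rich_countries))  # positive
```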
I've heard a lot of stories (including a recently popular twitter thread) about children claiming to remember past lives, down to factories they worked at, the names of their parents, or details about the past (a certain building that used to be a different building).
In Tibetan tradition, the Dalai Lama is chosen in part by having the candidate choose their toys from a previous incarnation. Of course, it's entirely possible that this process be manipulated. The number of adults who claim to remember anything about their previous lives is very small. A credulous defender may argue that society conditions people not to make claims like that.
My priors tell me that this should not be believed, but I'm curious: what's your take, not necessarily on reincarnation, but on memories from prior lives? What proof would convince you?
Possible if we're talking about ancestral memories, but unlikely per se.
More likely that the brain is doing something weird, like it does *all the damn time*. Like, simulating an alternative life isn't even that unusual, given stuff like dreams and reading fantasy novels, or how we rehearse imagined conversations all the time.
People remembering things that can be verified and are not currently known. For example, someone claiming to be a reincarnation of an ancient king and locating several previously unknown archeological sites.
Or digging out their stash of gold coins.
And this repeating multiple times. One can be an excellent archeologist and have weird spiritual beliefs, so it is not foolproof. Or they could have planted those coins earlier. But at least it is not fakeable just by looking at photos.
But I would not bother with verifying it on my own; it is about as likely that Harry Potter was a documentary.
This simple experiment would convince me: I whisper a secret into subject A's ear. I then kill subject A. Subject B then repeats to me the secret. This experiment could be ethically performed using those about to be euthanized or executed.
Problem is they probably don't remember all the specifics, and you don't know if their soul, or whatever you want to call it, ends up in a predictable location.
Yeah, the purpose of this experimental design is for positives to be convincing. To correct for the low power, you could just repeat it a million times or whatever.
It would take a while though. You would have to create somewhat memorable but weird enough phrases and visuals so that they are uncommon enough. And you would have to keep it secret, which would be hard to do. So every time a person is dying, you bring in a posse of clowns and let them dance and say weird phrases. And then you have those clowns sign non-disclosure agreements and threaten to sue them if they tell anyone.
And then you would have to ask a lot of parents of children claiming past lives to try to get specific information, and match that against your secret database of the artificial experiences given to dying people. And if you get a high enough % of children describing these weird experiences, then it would increase the probability of past lives? As long as the sample of children asked is not so large that randomness could explain why some got it right by accident.
And I have put way too much thought into this already.
We don't really have any idea what things would be memorable post mortem, do we? So I'd just go with randomly generated "correct horse battery staple" type phrases (just making them long enough to make false positives extremely unlikely) and then have a web site where the first person to enter a correct phrase would win a billion dollars or whatever.
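To give a sense of what "long enough" means here, a tiny sketch (the word-list size and phrase length are arbitrary assumptions on my part, not anything canonical):

```python
import math
import secrets

# Placeholder word list; in practice you'd load something like the ~7776-word
# Diceware list. Only the list size matters for the false-positive odds.
WORDLIST = ["correct", "horse", "battery", "staple", "clown", "posse"]  # etc.
DICEWARE_SIZE = 7776
N_WORDS = 6

def make_phrase(wordlist, n_words=N_WORDS):
    """Generate one random secret phrase."""
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

# With a full Diceware-sized list, the chance of one random guess matching a
# specific phrase is vanishingly small:
bits = N_WORDS * math.log2(DICEWARE_SIZE)
print(f"~{bits:.0f} bits of entropy; p(match per guess) ≈ {DICEWARE_SIZE ** -N_WORDS:.1e}")
print(make_phrase(WORDLIST))
```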
I probably could not be convinced, as any information such a person could share that could be corroborated could also be gamed. If someone could teach them the information, then that's going to be the more likely reality. I suppose the strongest proof would be for them to describe something that no one living could have seen, for instance a sealed room or perhaps something extremely remote. The key would be that no one living could have known what was in the room/place, and then after the prediction it could be verified directly. Even that kind of scenario leaves a lot of room for someone to game it. In fact, that's something most of us would scoff at if a famous magician offered to do it. We wouldn't find it fantastical enough to even be interesting, as it would be so easy to fake.
In Western societies, neither materialist nor religious beliefs allow room for such a belief to be very likely. Materialist approaches would point out the lack of mechanism by which memories could be transferred. Christianity has very specific beliefs about what happens to your soul, to the exclusion of other possibilities.
I’ve been trying to frame Afghanistan in terms of the trolley problem. Here is my result, and I’d like to hear comments/feedback. Obviously, it has two semi-intentional features-not-bugs, namely (a) it only captures a subset of the scenario and (b) it’s a bit reductive.
The Afghani Time Travel Trolley Problem
=====
There is a runaway trolley barreling down the railway tracks.
Ahead, on the tracks, there are a few thousand civilians tied to the tracks, as well as a couple of hundred soldiers, and $45 billion worth of arms and supplies. When the trolley reaches them, most of the civilians will die, the goods will perish, the arms will be stolen by enemies, but the vast majority of the soldiers will probably escape with only injuries.
You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks.
However, you notice that there are 20 million women tied to the other set of the tracks and unable to move. If you pull the lever, the trolley will switch to those other tracks and will head straight for them.
Instead of killing them, it will transport the women to the 9th century. In the 9th century, these women will have to wear burqas, give up most of the common freedoms women have had in the 21st century, and become chattel slaves to the men in society. Many of them will die due to maladaptation and shock. But overall, most of them will live.
You have two options:
1. Do nothing and allow the trolley to kill a few thousand civilians, destroy $45 billion of stuff, and injure (or let die) many soldiers.
2. Pull the lever, diverting the trolley onto the side track where it will make near-slaves out of 20 million women and their female progeny.
Which is the more ethical option? Or, more simply: What is the right thing to do?
Isn't this just an elaborate way of asking, "would you sacrifice millions of women to save some soldiers (but not equipment)" ?
The whole point of the trolley analogy is to render an ethical problem down to its most basic terms; it stops working unless the setup is cartoonishly simple.
In real life, the decision-making process is never that simple, since you are forced to reason under uncertainty. All of your decisions are probabilistic, and the tradeoffs are never crystal-clear. In addition, in this specific case, the decision is not between "save some soldiers" vs. "save all women", but rather between "continue losing lives and money every day" vs. "hopefully save a few women some of the time over a generation if not a longer period of time". And this doesn't take into account the global geopolitical situation, long-term effects on terrorism, the ethics of cultural hegemony, etc. etc. You'd need to spawn a whole trainyard of uncertain trolleys if you wanted to illustrate this problem.
For the $45bn figure, you'd be talking about the per-year cost I'd gather? It would still be understated, if so. Afghanistan was a >$1bn/week adventure, lasting decades.
For the returning-women-to-the-9th century, you mean returning them to the 20th century, right? 20th century *Afghanistan*, of course. Apples to apples.
And which fraction of the women are we discussing here, even? Yes I know that there has been a surreal two decades where some Afghanesses could enroll into Critical Gender Theory studies at university, the whole show essentially bankrolled by foreign taxpayers at an absolutely ludicrous overall cost. But even during those two decades those were mere oases in Afghanistan -- modern liberal versions of Potemkin villages, if you will. The western invasion did not change the mentality across the overwhelming majority of the country by *one inch*. If the astonishing speed with which the whole house of cards collapsed after the US military withdrew has not awoken you to this fact, I'm not sure what else could. You see, the Taliban are not genius military strategists, achieving this record-breaking reconquista with some new and unstoppable military innovations such as the phalanx or the blitzkrieg. They are simply an organized faction that's *way* more aligned with the mentality of the overwhelming majority of the population, than the western invaders ever were. They encountered virtually no resistance. The US leaving decent modern weaponry to a small minority before pulling out, has not made much of a difference.
For most women there, things will not change that much in 2021, simply because things had not changed that much since 2001. But at least now Western Academia can hold a string of parties to celebrate the fall of that bastion of vile western neocolonialism, the defeat of those horrible supremacists who openly held their culture to be superior to an equally-valid indigenous one, and who used mechanisms of oppression to impose their values on a subjugated people -- some of whom have even ended up internalizing the oppressor's worldviews. (OK, admittedly I might be wrong on that one, as I'm generally too dumb to understand modern Western Academia.)
If we only consider women in Kabul (~2 million) and $60bn / year, then in purely monetary terms this gives $30,000 per woman-life-year of being able to study critical gender theory vs. living in the traditional Afghani society. This is within the range of things considered cost-effective in the Western world, although not necessarily cost-effective globally.
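Spelling that arithmetic out, in case anyone wants to poke at the inputs -- both numbers here are just the rough assumptions stated above (~2 million women in Kabul, ~$60bn/year), not independently verified:

```python
# Back-of-the-envelope version of the figure above; inputs are the rough
# assumptions stated in the comment (~2M women in Kabul, ~$60bn/year of cost).
annual_cost_usd = 60e9       # assumed yearly cost of the engagement
women_in_kabul = 2e6         # assumed number of women in Kabul

cost_per_woman_year = annual_cost_usd / women_in_kabul
print(f"${cost_per_woman_year:,.0f} per woman-life-year")   # -> $30,000
```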
Are you quite sure about your denominator in the calculation which gave you the "cost-effective estimate by Western standards"?
How many of Kabul's female population (circa 2.3 million, all ages) would be able to go to university, regardless of the curriculum taken?
No, scratch that; how many women *had* been going to university while the US-propped government was in place?
(sidenote: a UNICEF report that predates the Taliban take-over counted 3.7 million children country-wide as out of school, 40% boys, 60% girls - in a country of 39 million people overall)
OK, scratch that. Let's not even insist on university. How many women's lives were actually changed, in terms of them having an independent source of income from their work - even including women without university degrees?
The denominator will shrink to *well* below 2 million, I'm afraid, even with all those extra inclusions. And the quotient will increase to *well* above the yearly salary of the average American who got taxed to make this happen.
I am confused about the factual claims you're making. Are we looking at the same plot, https://tradingeconomics.com/afghanistan/labor-force-female-percent-of-total-labor-force-wb-data.html? It shows a jump from ~15% to ~22% of the workforce being female; I can't find a way to change it to absolute numbers (maybe I need a subscription to do that). To be consistent with your numbers, this would require the entire workforce to be about 12 million out of a 40 million population now (and 5 million out of 20 million population in 2000). That doesn't match my intuition for the Afghani unemployment rate? Tradingeconomics (https://tradingeconomics.com/afghanistan/unemployment-rate) asserts that unemployment rate has been somewhere around 10%, and the 2020 Afghanistan population pyramid (https://www.populationpyramid.net/afghanistan/2020/) takes ~25% of the population out of the workforce, but that still gives me (22% - 15%) * (75%) * (90%) * (40M) = 1.9M women who are employed now and otherwise wouldn't be. (At, therefore, foreign taxpayer cost of $32,000 per year per woman.)
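Laying that calculation out explicitly, in case I've botched a step -- every input below is one of the rough guesses above (tradingeconomics shares, the population pyramid, ~10% unemployment, ~$60bn/year), so treat the output as only as good as those guesses:

```python
# Sketch of the calculation above; every input is one of the rough guesses
# quoted in the comment, not verified data.
population = 40e6                    # current Afghan population (assumed)
working_age_share = 0.75             # population pyramid keeps ~75% in the workforce pool
employment_rate = 0.90               # ~10% unemployment (assumed)
female_share_now = 0.22              # female share of workforce now
female_share_then = 0.15             # female share of workforce in 2000

extra_employed_women = ((female_share_now - female_share_then)
                        * working_age_share * employment_rate * population)
print(f"{extra_employed_women / 1e6:.1f}M women employed who otherwise wouldn't be")
# -> 1.9M

annual_cost_usd = 60e9               # assumed yearly cost, as elsewhere in the thread
print(f"${annual_cost_usd / extra_employed_women:,.0f} per woman per year")
# -> $31,746, i.e. the ~$32,000 quoted above
```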
As Scott pointed out during lockdown analysis, in the West medical interventions are considered cost-effective up to ~$100,000-$150,000 per quality-adjusted life year. That, too, is a very large number; part of my point is that very expensive interventions are sometimes considered cost-effective.
We're looking at different numbers. You're giving percentages of female-workforce / total-workforce. I gave the total-female-workforce numbers -- as said, 0.8M in 2000, 2.4M in 2021.
Ignoring the doubling of the population, you'd be looking at a delta of 2.4M [@2021] - 0.8M [@2000] = 1.6M more women employed.
NOT ignoring the doubling of the population, the apples-to-apples delta would be 2.4M - 1.6M = 0.8M, the figure I used.
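In code form, since the two deltas are easy to mix up -- the figures are the ones I'm using above (0.8M women employed in 2000, 2.4M in 2021, population roughly doubled):

```python
# The two deltas being contrasted above, using the figures from this comment
# (0.8M women employed in 2000, 2.4M in 2021, population roughly doubled).
employed_2000 = 0.8e6
employed_2021 = 2.4e6
population_ratio = 2.0    # assumed 2021 population / 2000 population

naive_delta = employed_2021 - employed_2000
# Scale the 2000 baseline up to the 2021 population before differencing:
adjusted_delta = employed_2021 - employed_2000 * population_ratio

print(f"ignoring population growth:  {naive_delta / 1e6:.1f}M more women employed")    # 1.6M
print(f"apples-to-apples, adjusted:  {adjusted_delta / 1e6:.1f}M more women employed") # 0.8M
```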
You're making unfounded presumptions about which portions of the population pyramid do or do not engage in work in Afghan society. You might be copying criteria that make sense in the West, and those criteria can wildly miss the mark in Afghanistan. Published unemployment figures are practically useless for determining labor force participation, even in the West.
Scott's figures of $100k - $150k per QALY are taken *intra-society* -- i.e. if Society S has a product P per person per year, then we're discussing exchanging 2*P to 3*P for a QALY. In plain English, the US has a GDP per capita around $50k, and two or three years of such product could be spent to give a US citizen one more QALY.
Fair enough. But perhaps more typically relating to the final few years of one's life "reclaimed from nature", than to an open-ended decades-long arrangement.
Afghanistan's GDP per capita was $330 before the US started pumping in the billions, and went up to $570 after it did. No "k"s involved.
I'm not asking you to multiply those GDP figures by 3, I'm asking wherefore do you assert a duty of a US taxpayer to fund Afghan QALYs while applying a calculation that is based on the US economy?
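To make the mismatch concrete, here is the same 2*P-to-3*P rule of thumb applied to both economies, using only the GDP-per-capita figures already quoted in this thread (so take the exact numbers with a grain of salt):

```python
# The ~2x-3x GDP-per-capita rule of thumb from above, applied to each economy.
# GDP figures are the rough ones quoted in this thread, not checked elsewhere.
def qaly_threshold_range(gdp_per_capita):
    return 2 * gdp_per_capita, 3 * gdp_per_capita

us_low, us_high = qaly_threshold_range(50_000)   # -> (100_000, 150_000)
af_low, af_high = qaly_threshold_range(570)      # -> (1_140, 1_710)

print(f"US-scaled threshold:     ${us_low:,} - ${us_high:,} per QALY")
print(f"Afghan-scaled threshold: ${af_low:,} - ${af_high:,} per QALY")
# Compare either range with the ~$30,000+ per woman-year computed upthread.
```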
A reasonable amount of stability was ensured with a much smaller fraction of troops, as you can see on that graph. So I think the real cost is possibly at least 5x lower. And arguably even lower if you consider that those soldiers and military equipment might be stationed somewhere else now. It isn't like all those soldiers were just fired overnight.
So if you consider the above, then the marginal cost of keeping the Taliban out of Afghanistan might be in the thousands of dollars, not tens of thousands. Spread out over multiple countries.
brown.edu puts it somewhere north of two trillion, in an estimate that is slightly more substantiated than the graph you pasted above.
I agree that some of those expenses would exist even without the Afghan engagement, and there could have been some overestimations -- so I'm operating with about half that figure in my other comments.
The CRT comment sounds facetious at best. In my experience at least, people don't go from "I have nothing" to "I want self-actualization", they usually go through a few intermediate steps that often look like "I want to better my people" (I want to become a doctor/lawyer), "I want to gain *lasting* status/money" and so on. CRT specifically, and literature/history/sociology style humanities in general, rarely pave the path to enduring freedom for first-generation oppressed individuals in 3rd world countries. This is not to say that art, history, humanities can't contribute to these struggles - they can - but investing in formal education pathways along art/history/humanities specialization is not how people (with very little, who've experienced duress most of their lives) tend to spend their one shot at betterment.
Even in an improved Afghanistan, I'm generally skeptical that society would allow girls to study beyond high school/10th grade very often. If you want to keep fine-tuning the cost-model, then you'd have to account for the fact that mere high-school education costs upwards of 75k USD/year for many girls out there under the "NATO/US militaristic help" model, assuming all of the military expenditure's goal is/was women's education (and nothing else).
I wouldn't model success in terms of degree acquisition alone BTW - that doesn't sound like a good measurement. To me, things like basic literacy are excellent indicators to look at. Next, female lifespan is another indicator (which ought to correlate well with increased literacy levels), along with infant mortality rates (which also improve with high maternal literacy and well-being), the role women play in the informal employment sector (I don't know how we'd measure it, but it seems important that we account for this), rates of child marriage (this ought to drop to count as good), education attainment levels in the next generation (an increase in maternal education will have a positive impact across the board on the next generation) and so on. I don't have good references to show that these sorts of interdependencies ought to work, but these were all taken as a given in India in the '80s/'90s when I was growing up there and formed the background tapestry of public discourse, and my sense over the years is that most of it worked the way it was posited.
But I also admit after all's said and done and some more clever math is undertaken, it could all still amount to a net RoI that's not very attractive to most.
It's fine if you wish to call my remarks facetious. I think I mentioned CGT not CRT, but I also mentioned I'm too dumb to understand modern Western Academia anyway. I don't want to linger on those remarks because they are hardly the bulk of my point.
The point is that forcing western liberalism as a value system has practically not left a *dent* in the mentality of the Afghan population at large -- in spite of two decades of military engagement, a plethora of NGOs and insane amounts of money burned. In 2000, 8% of the female population was employed. Two decades and *over a trillion dollars* later, 12% are employed. I'm not that hard to impress, but I'm not impressed.
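(For the pedants: those percentages are just the absolute figures from upthread divided by a rough female population of half the total -- a quick sanity check, nothing more:)

```python
# Quick consistency check of the 8% / 12% figures against the absolute
# numbers used earlier in the thread, assuming roughly half the population
# is female (total population ~20M in 2000, ~40M in 2021).
female_pop_2000 = 20e6 / 2
female_pop_2021 = 40e6 / 2
employed_2000, employed_2021 = 0.8e6, 2.4e6

print(f"{employed_2000 / female_pop_2000:.0%} of women employed in 2000")   # 8%
print(f"{employed_2021 / female_pop_2021:.0%} of women employed in 2021")   # 12%
```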
The point is how blind a large portion of the west is to just how *spectacularly* the project of imposing western liberalism has failed there. The cognitive dissonance is sometimes so bad that people have started manufacturing trolley problems which posit that no less than a full 100% of the ~20 million women in Afghanistan had been basking in the light of western liberalism, until the wild money hemorrhage was stopped, causing them to return to 9th century Afghanistan. More like 20th century Afghanistan, and even that only for a fraction of the women; the vast majority of women weren't much affected by the western-liberalism bit anyway, but could occasionally be affected by the persistent-low-grade-warfare bit.
You say that Afghans are oppressed individuals fighting for freedom. I'm not following: *who* do you mean they are oppressed by? By western invaders? By some undemocratic forces? By themselves?
> Which is the more ethical option? Or, more simply: What is the right thing to do?
Haven't we just gone through a few centuries of colonialism and imperialism, and in retrospect, now see that this kind of thinking was wrong?
What is your confidence that how you've defined the two tracks is both precise and accurate? Frankly, I think it's near zero. Furthermore, does your argument not clearly extend to policing the whole world, and thus basically justify starting a new world war?
Finally, there are more than two tracks, so your scenario is a false choice. For instance, it seems like it would cost significantly less money and fewer lives to simply help anyone who wants to leave Afghanistan. Then you can just leave it to the Taliban.
Ahead, fading into the distance, you see an infinite number of identical track switches along the current track, each stretch between switches costing thousands of dead civilians, 45 bn dollars lost, and a couple of hundred soldiers maimed or dead.
The number of potentially time-travelled women on the opposite track increases at every switch, but the tracks with the women always end at a trolley stop instead of another switch.
You can either let infinite civilians and soldiers die and let infinite money be wasted or you can at some point send some finite number of women to the 9th century where they will mostly survive.
>You can either let infinite civilians and soldiers die and let infinite money be wasted or you can at some point send some finite number of women to the 9th century where they will mostly survive.
I don't think we can discount their descendants from this, about half of whom would also be women in any given generation.
It seems pretty implausible to me that we can at all accurately predict what Afghanistan will look like in even ~3 generations, and even more implausible that it would specifically be exactly what the Taliban want.
How confident are we about how the switches in the trolley work? The classic trolley problem is good because a physical switch that moves tracks is a familiar device we can be confident in. The variation where we are told that a large person on a footbridge can be pushed off by me and will certainly stop the trolley involves overriding our physical intuitions about how these situations work (which may still be playing a role in our moral intuitions). With the Afghanistan trolley problem it seems likely that the switching circuits involve lots and lots of feedback loops, so that it's unclear whether option 1 will *also* lead to many women being forced to live in 9th century conditions, and also whether option 2 will *also* lead to the deaths of many civilians and the destruction of skyscrapers or other physical goods.
Many women live in 9th-century-like conditions today in parts of rural South East Asia and the Middle East, yet we don't describe those parts of the Middle East and SE Asia as a whole as regressive, 9th-century-like societies. I concede that there is truth in your observation that #1 would also lead to many women being forced to live in a highly regressive society, but it would still be a flawed 21st century society with regressive features. Implicit in my trolley-problem framing is my fear and prediction that under the Taliban, the society of Afghanistan would stop resembling a "21st century society with flaws" (at least for women) and become much more like a 9th century society. For me at least, there is a stark difference in values between those two kinds of societies.
How many of these trolleys do you have? One a year? One a century? One ever? (The last one seems optimistic in Afghanistan.) What quality of life do the women have while the trolley is running through the civilians, goods, and arms? (Civil wars, even 20th century ones, are generally not great to live through.)
I think this one got me thinking, and I'm going to settle on "one (over a long duration, equivalent to 'ever')".
In practice, what we've learned in Afghanistan over the past 20 years is that the quality of life isn't anywhere close to what someone in a western society would recognize, but it was (is!) still much better than life under the Taliban before. At the minimum, for women, coerced marriages haven't been endemic (although I'm skeptical of reports that they're gone now - they still happen in places like India, so I doubt they're erased from Afghanistan in a mere 20 years), women can go to school and work, and aren't forced to cover their faces and so on. I doubt these are universal personal freedoms (again, because these legal freedoms aren't yet universal personal freedoms in all of SE Asia, so Afghanistan couldn't have leapfrogged SE Asia in a mere 20 years) but I think this was a significant+good start.
The way I see it, the trolleys themselves are metaphors for choices we make, and in this instance, they are about trying vs. not trying to improve conditions. The framing in Option #1 feels (perhaps is) overly specific to trying via one methodology (foreign, mostly US, coercion/"help"/troops). I don't know if direct militaristic intervention needs to be the 'try' approach, but that's the one I was most familiar with, and I felt like that's what would resonate with readers.
I'm trying to point out through the trolley problem that giving up on trying is much worse than trying and failing - or at least that it shouldn't be so easy to pick option 2 over option 1. Anyone in favor of option 2 should really be in favor of a better "option 1", and saying "it's their problem" is really an awful thing to say. Because of those reasons, we should work very hard to find different/improved ways of preventing Afghan society from falling all the way back to pre-modern times for women.
Now, maybe this point cannot be made as long as Option 1 stays in its current form (perhaps it's a socially/politically toxic option and presenting it in any form makes the discussion untenable), or maybe the lesson here is that the trolley framing isn't the right vehicle (;-)) for making this argument.
It wasn't helpful for answering your question head-on, or for making other risk/reward decisions carefully, but it added enough uncertainty to make me wonder whether getting an early booster was really worth the effort.
Since J&J is similar to AstraZeneca, I'd imagine that the study translates well to J&J, but I haven't seen studies that start with mRNA and then add viral-vector later on - these studies all seem to start with a viral-vector dose, then add an mRNA dose on top of that. Maybe it's reasonable to assume commutativity, maybe it isn't - hard to say.
OK, well, it's not complicated. In both cases you've got a bit of nucleic acid as the key cargo, and in both cases the nucleic acid specifies the structure of the S ("spike") protein* on the SARS-CoV-2 virus, which is what docks with the human ACE2 receptor and allows the virus to gain entry to the cell. It's the "lockpick" that opens the door, so to speak. The idea is that you'll fool the vaccinated cells into building a crapton of S proteins, which they will express (show) on their surfaces, and which will by complicated mechanisms lead to your immune system realizing some of its cells are infected and need to be killed, and by the way we should be on the lookout for this S protein thingy, which is the marker of evil invaders.
The first important difference between the two vaccines is that the J&J vaccine uses the DNA that codes for the S protein, while the mRNA vaccines (Moderna and Pfizer/BioNTech), as the name implies, use messenger RNA**. The DNA has to get into the nucleus of the cell, where it is transcribed in the usual way into your own mRNA (it generally stays separate from your chromosomes rather than being stitched into them), and then translated to S protein by ribosomes. The mRNA vaccines skip this step by directly flooding the cell with mRNA transcripts. (Either way, the vaccinated cell has been hijacked, but it's about to be killed by the immune system anyway, so it doesn't matter.)
The second important difference is the delivery vehicle. In the mRNA vaccines they enclosed the RNA in a lipid nanoparticle, a bunch of ordinary fatty acids plus cholesterol plus some stabilizing derivatives of polyethylene glycol, a common polymer used in stuff that goes inside the body. This gunk surrounds the mRNA, shielding it from immediate degradation, until the lipid shell makes the particle fuse with cell membranes, releasing the mRNA into the cell. On the other hand, the J&J vaccine uses a hollowed-out wild adenovirus (a common form of cold virus), serotype #26 in the adenovirus family, called Ad26. The DNA is inside the adenovirus, which gains entry to cells in the usual way such viruses do. The important distinction here is that if you have ever been exposed before to Ad26, then you may have some pre-existing immunity to that virus, in which case, it may well be destroyed before the vaccine has a chance to do much -- this has happened before, and is why J&J chose a more unusual adenovirus, to which fewer people have already been exposed. It's also why there is just one shot -- once you've used Ad26 as a vector, you can't really use it again, because by then by definition your patient has been exposed to Ad26 and any future vaccine based on it won't work very well.
It's also the case that the remnants of the vaccine are disposed of slightly differently. The remnants of the Ad26 are just random proteins, and get chopped up and digested in the normal way. The normal lipids in the mRNA nanoparticle do the same thing, but some of the less common molecules make their way to the liver, great detoxer, where they get oxidized into more ordinary (disposable) molecules over a few weeks (this was explicitly studied in rats before they tried it on people).
Key takeaways:
1) So far as anyone knows, all the foreign material is gone after 2-3 weeks, so there's nothing left of it by the time your second shot comes around.
2) The mechanism of immune provocation is essentially identical in both cases, after all the hokey pokey all you're doing is tricking cells into expressing S protein, which the immune system takes as a heads-up that it should be on the lookout for this weirdo.
3) There will certainly be slight to modest differences in how strong the immune response is, because I don't think the modified S protein they use is exactly the same, and because of the variations in how you trick the cell into making S protein. There's a bunch of subtlety about how the body does this whole process that is still unknown, and could fling some interesting wrinkles in, e.g. how well one method defends you against variants, et cetera. But none of these things should cause evil synergy between the two vaccines, so far as anyone knows -- just a question of how much good synergy there might be.
------
* It's actually the code for a slightly-modified form of the S protein, which is stable by itself, as the normal S protein needs to be embedded in the virus outer coat to be stable.
** Actually they use a modified mRNA, which contains slightly weird nucleotides so it resists degradation, as there is machinery in the cell that would otherwise chew it to pieces in no time flat.
Sorry, I realize something I said above was ambiguous: it's the J&J-vaccinated cell that ends up hosting the foreign DNA, but that doesn't matter because it's a dead cell walking. It's also why we don't care whether the cell is deranged in some way by having its ribosomes hacked to manufacture a billion copies of viral S protein. The cell is going to get killed anyway.
Everything immunologists have been saying publicly about immune theory suggests that being exposed to a different form of the virus (a different vaccine, or a different strain) should, if anything, boost your immune protection a bit more than being re-exposed to a form you've already seen. The only study I am aware of to get empirical results about mixing vaccines used AstraZeneca and BioNTech/Pfizer (https://www.nature.com/articles/d41586-021-01359-3), and they seemed to suggest that the mix-and-match in either order was about as good as two doses of the mRNA vaccine in terms of antibody response, and better than two doses of the adenovirus vaccine, with slightly higher fever-type side effects. I don't think the study was big enough to determine whether the rare blood clot side effect of the adenovirus vaccine was made more or less likely.
If vaccine passports remain important in many cities for a long time, and if Texas continues to make it difficult to access electronic records of our vaccination, then I have been considering getting a dose of Johnson&Johnson on a visit to a coastal city in order to have a good electronic record of a vaccine, even though I already have two doses of Moderna and a paper card recording them. So I would also be interested to know if anyone who actually works on this stuff has further thoughts beyond what I've gathered from media discussion.
Who all remembers the SSC posts, "Legal systems different from ours, because I just made them up" and "Archipelago and Atomic Communitarianism"? Inspired by those, and by the various ACX posts on ZEDEs, I set up a subreddit (r/archipelago)! Right now it only has a handful of seed posts by me intended to give a sense of the purpose of the subreddit, but I hope others contribute and it becomes a repository of many such ideas, ranging from the practical like ZEDEs to mere fictional amusements like the legal systems post.
Why? Some of us write such things for fun. Some of us write such things to explore the range of the politically possible, in hopes of developing ideas that will influence the future. I think it's potentially valuable for both purposes to have a publicly accessible collection of designs for alternative political systems. So if you agree, come on by https://www.reddit.com/r/archipelago/ and share your ideas!
Oh wow - I had been seeing this headline when I glanced at the news the past few days, but hadn't actually clicked, so I didn't realize the information it contained was the result of a new transparency law! I had assumed it was just the NYTimes compiling numbers that insurance agencies had already been able to see for years. Since it appears this is newly public information, that could well shake a lot of things up!
(I noticed in an article about inflation the other day that health insurance was apparently one of the few expenses whose cost had been going up dramatically before the pandemic, but started actually going down during the pandemic - I wonder if this transparency law is somehow related.)
My guess was going to be all the elective surgeries that got postponed because people didn't want to go anywhere near a hospital if they didn't have to.
I would be surprised if a reduction in spending for one year would have propagated through to prices that quickly, but I know little enough about insurance pricing that I can't rule it out. It seems more plausible to me that changes in negotiated prices they pay hospitals would change their plans, and thus prices they charge for coverage, but saying it out loud like this does make me skeptical of this too.
New Parents / Expecting Parents discussion anyone? It seems like we've had a few in this group based on some past points.
Do you guys think there are certain innate differences in how babies respond to fathers vs. mothers? We have a 3 month old, and often she gets so tired that she starts getting more agitated and wound up, and as much as I try troubleshooting (diaper, food, burp, these are easy), I can't calm her down and get her past that high agitation state when she needs to sleep. My wife is very successful at it, even if it takes a while. She's had a lot more practice though, since she was off 12 weeks, and now she's left work.
Also, daycare is such an incredible challenge, frustration and expense that we are now a one-income household. With our baby struggling to gain weight and around the 6th percentile in weight at 3 months old, we couldn't stick it out with our daycare. They would barely try to feed her or work around her little mannerisms that make it seem like she doesn't want to eat.
We live in a small city, but the wait time to sign up and expense of daycare is still quite bad, I can't imagine how it would be in a big city.
My experience (5 children) is that children respond differently to each parent, from the moment of birth, and it will vary all through their lives. There are times when you'll be the favorite parent, and other times when your wife will be, and I don't think you can predict it. It probably won't even have anything to do with what you or she do, so I wouldn't try to analyze it too far or try to change what you do -- it's probably just what's going on inside their tiny little heads, how all the neurons are jostling around, making and breaking connections.
It helps to cultivate a state of Zen acceptance about this stuff. Do your best, but don't let the occasional weirdness stress you out or make you doubt yourself*. And don't expect things to be picturebook perfect either. Children are little people, and they have all the quirks and individual strengths and weaknesses of people in general. Furthermore, probably only half the outcome, now or later, is dependent on what you do. A lot of it depends on their nature, and on what they do -- and even at a very young age, children start to make decisions on their own, at least a little independently of what you do.**
------------------
* Consider it training/prep for the high-school years.
** Which is ultimately a good thing, it's how children avoid acquiring some of the worst habits of their parents.
I was unfortunate (fortunate?) enough to have step kids who were past the small child stage and into high school before my tiny was born. I did not fare well with the very small stage (under 1 year) but am doing much better at the 3yo stage. The whole "I need IO that I can parse" thing is a very big part of what makes parenting possible for me.
Yes, I think most parents have the experience of being "better" at certain ages. I certainly feel that way. There are certain ages I feel I can handle very well, and others where I feel more than ordinarily stupid. I should probably ask my oldest, who are old enough to have a sense of humor about it, whether they see the same thing (about me). It would be interesting if they did, and it would be also interesting if they didn't.
When our (now two-year-old) daughter was little, my husband stayed at home with her while I went to work. It was still much easier for me to calm her than it was for him, but I suspect that was breastfeeding related. He would have to carry her around for a long period of time before she would settle, whereas I could just nurse her, and it was a much different experience than bottles. It was also much easier for me to put her in a soft wrap and walk around with her than it was for him -- I think something about having already adapted to having a baby in that location? Which she liked a lot.
I've got another coming, and will see if it's different or not.
My daughter reacts differently to my wife and me. My wife is better at calming her down for bed or to take a bottle if she's really worked up. That said, I'm not sure the difference is significant. We both do it reasonably okay.
My learning from the last six months has been: patience. Often she's crying her loudest and seems most inconsolable moments before she's asleep. I just grit my teeth and keep rocking her while continuously reminding myself of that fact.
We also (early, 1-3 months) did this thing that we just started calling "yo-yoing". She'd be inconsolable and then seemingly doze off and then start screaming a few moments later and then doze again. Initially I would get super frustrated when she'd wake up each time, having just thought I was finally in the clear. But after a few attempts I noticed the pattern: each scream was shorter and each doze was longer until she was finally asleep. So I just mentally prepared myself for a half-hour to an hour of yo-yoing and suddenly it didn't seem so bad: each period felt like progress.
All of this was psychological and mostly just learning to be patient (I'm not). Shortly after she was born, I heard this great Sam Harris episode on "framing any given moment as your potential last time doing X". Using this mental framing helped me to wait out those long bouts of crying. Sure enough, I've already put her down in her bassinet for the last time because she just upgraded to her crib. I already wish I had appreciated it more the last time I did it.
All the best! Holy hell is it a lot of work, but I love being a Dad.
Do kids react differently to different parents? I haven't seen a child who doesn't. My daughter didn't want anything to do with me for the first two years of her life. When my wife left the house, she would scream from the time the door shut to the time my wife came back. My son was totally different. All he wanted as an infant was to take naps on my stomach while I took a nap in the recliner. They both grew out of it after a while.
In twenty years, you'll likely be glad you traded income for more kid-parent time. It sucks now, but take solace in your future self's preferences!
This was also something I noticed with our kids while they were little. For the most part I attributed this to the fact that, like your wife, I stayed at home with the kids which I think has two effects.
First, on the baby's side, I think there's just an innate recognition of Parent Who Is Always There, the effects of which are more pronounced when the baby is upset and/or sleepy, like the situation you described. With our first, it was basically impossible for my husband to calm her down or put her to sleep for the first year.
On the parent's side, I found that because I was with the kid all the time, I just really became attuned to every little thing that worked/didn't work. There were just little quirky things that I figured out worked for the baby that, when I tried to give suggestions to my husband, just made me sound like an insane person. ("She likes being held upright at first, but then you've got to shift to a cross-body cradle position once she gets drowsy enough. When is she drowsy enough? Oh, I have no idea how to tell you that. It's just something I can tell when it happens.... Also, once she starts closing her eyes, you've got to bounce every few steps.")
I also realized once my kids had gotten older that their difference in response to my husband also largely tracked with how their personalities developed. My oldest, who gave my husband fits, is now a very independent kid who is loving and affectionate, but a bit standoff-ish. She doesn't like being hugged too tight and isn't really "cuddly" by nature. Our second, who tolerated my husband much more than the first is now a very demonstratively affectionate kid who gives hugs and kisses left and right and still loves to be held and cuddled.
Kid #3 is still a baby, but he absolutely loves dad and it's very rare that my husband can't handle him when he's fussy and/or needs to go to sleep. Beyond what I mentioned above, I also attribute this partly to the fact that after a couple kids you're not nearly as neurotic about things and babies can tell when you're stressed out or relaxed.
Just hang in there! All of this is inherently stressful, and when you slap on weight gain issues as well, everything gets dialed up to 11. Our oldest was very low birth weight because of medical issues on my end (below the 1st percentile!) and had nursing/feeding issues because she was so small. It was a very rough first couple of months. If it makes you feel better, she didn't get out of the 5th percentile till she was 9 months old! Once we introduced solid food she jumped from the 8th to the 17th to the 30th within 3 months. Before that, we went from 4th (2mo) to 3rd (4mo) to 0.9th! (6mo).
Very encouraging on the stories of different kids being different. Yes, we are still in that neurotic stage. Also, encouraging to hear how your oldest recovered.
My wife and I are both very short. I'm near the 1st percentile in height for men, so I guess it shouldn't come as a surprise that our baby would struggle with that, but we joke that she's going to be tall and skinny because she's jumped up in her length much more quickly than weight.
Same age kid here. I think there are innate differences with response to mom/dad, but they probably aren't as big as you think they are.
I plan to hire a nanny come December (right now, baby is looked after by my husband and Mother in law during the day). My main concerns with daycare are illness + lack of individualized care: not enough people to comfort him right away when he's upset or needs help going to sleep. I would be much more open to daycare once my kid is physically stronger and able to socialize with other kids.
With illness, are you specifically looking to trade "illness now" for "illness later"? My understanding was that they will be exposed to about the same amount of different diseases over the course of a lifetime anyway, so with most childhood illnesses you only get to pick when, not whether, they happen. (Although there's definitely a few that are particularly common in small children, e.g. ear infections and croup are a lot more of a young child problem due to young child anatomy, so you do get to just skip them.)
The other thing to be prepared for is that with a kid in child care, you will get sick a lot more often too. I went from getting a cold every few years to getting a few colds every year.
Of course now you've got to worry about covid as well.
Yep. I think it's better to postpone sicknesses for when he's older, because he's less likely to have severe complications and can emotionally cope better with being sick.
Makes sense -- my son's first cold (~3 months old) sucked, and when he was ~5 months and a visiting ~9-month-old passed off his cold to us, I remember thinking "I wish we were at the point where you can be that cavalier about a runny nose..."
Don't underestimate those 12 weeks your wife spent with the baby. It may be a matter of time and patience, and you may need to spend an equal amount of time soothing the baby so that she is just as comfortable with you as with her mom. Calming an agitated baby can be extremely stressful, and knowing that your partner can just take care of it will be a strong temptation. How you move forward will depend on you and your wife's tolerance for taking the time and arranging your schedules. My wife took on that responsibility in our household, and the kids were more easily soothed by her most of the time. I was still involved enough to be able to do it fairly well, so that I could step in when really needed (aka my wife on edge and needing a break). Our middle child got really bad gas around 11pm almost every night for months, and would be up crying non-stop until it passed. My wife and I would take turns trying to sleep and carrying around the baby. Oddly enough that's a good memory now, despite being a really hard experience at the time.
Yeah, I agree she's had more time and that's a big deal. I tend to walk around with and burp our baby more often, but soothe her less. She likes being over my shoulder and she's got really good at keeping her head straight. I still have to remind myself sometimes that she needs head support, because usually she does pretty well on her own.
I know these memories will be fond ones, and the more I can share, the less overwhelming it will be for my wife. Our baby's at the stage where she's doing a lot with her hands now; she whipped the glasses off my head yesterday.
Children vary a lot in how they respond to parents. My oldest was fine with either my wife or myself, and middling decent with other caregivers, but our youngest only wanted mom. With the younger I could, after a lot of time and struggle, get her to take a bottle or go to sleep for a nap but most people (grandparents, sitters) couldn't. We were super lucky to find a day care with a very experienced infant caregiver who could get her to take a bottle or nap but I'm not sure what we would have done if we hadn't (my wife probably would have had to stay home like yours).
I would mostly just advise you to cultivate extreme patience on the issue: after trying the usual suspects you mentioned just keep holding and rocking. Keep the head elevated as you do so to help with reflux. It may take you longer than your wife, and you may never be as good at it, and just remember that it may have more to do with the child's preferences than anything you're doing or not doing.
Some good tips. Good to hear some positive experiences of daycare. We got on the list at several places as soon as we knew for sure and only got a response a few months before my wife was supposed to go back to work.
I'm sorry your daycare didn't work out, for us it improved the parenting experience so much! (Mostly just because we both prefer the company of adults and computers to that of babies, but also, they somehow taught our kid to enjoy tummy time.) We didn't have to wait to sign up, but we were able to shop around (bigger city advantage?) and find a place we liked that was able to take us. We were doing this about 3 months before the kid, so about 6 months before starting the daycare. In a big city on a coast, I hear you may need to start even earlier.
No idea on mothers vs. fathers, I don't recall a major difference with ours; some of the time the favorite parent is clearly the dog.
Good luck getting out of 6th percentile! We went through that when the kid was 1 month and discovered he just really preferred bottles to breast, which of course then made daycare much easier. (He went from 6th to 96th to just being a large kid.) We did have to experiment a little with bottle types, there was one he liked even less than the breast.
Yeah my wife didn't think she could do a good job with breast, so we did formula right from the start, but we've changed formula and now we are on the Nutramigen (expensive) stuff, and we've found an anti-colic bottle that seems to help her too.
Does anyone know where the (Christian) position that after death you will get to know everything (from the meaning of life to who ate your sandwich one day) comes from, or when it starts? Is it there in the Old Testament, or is it some human universal, or what? Sorry, I know nothing about religions; I grew up in a mainly atheist society built on a previously Christian one, but all through my life I've vaguely noticed claims that if you reach Paradise you'll Know It All.
There's a strong argument that the old Jewish and early Christian understanding of the afterlife is not the same as the modern Christian one. Prof. Bart Ehrman has written a book on this. Here's an interview about it: https://text.npr.org/824479587
A version of this idea is already there in Plato. In the Meno, Socrates argues that since an uneducated boy is able to learn that the diagonal of a square is the side of the square with double area, merely by answering questions Socrates asks and without being told anything, therefore the knowledge must already have been present and just needed to be recalled. He generalizes this to suggest that when souls are disembodied (before birth and after death) they are in contact with the forms and therefore have all knowledge, and in life, what we think of as learning is mainly recollection.
This idea sounds similar enough to what you're describing as a Christian view, and many early Christian views were heavily influenced by neo-Platonism, so perhaps this is a relevant influence.
In the Meno, Plato's Socrates does indeed advance the theory of learning as recollection - but he doesn't present it as his own theory. Before introducing it, he says that he learned of it from "priests and priestesses" (ἱερεῶν and ἱερειῶν). This suggests that while Plato might be the first *extant* Western source, he is likely not the origin point for the idea, and that it has a deeper and earlier foundation in Western religious tradition.
Also, it's anybody's guess whether Plato actually meant to suggest that Socrates actually believed the theory. By the end of the dialogue, consideration of the theory hasn't helped them answer the core question of the dialogue - whether virtue can be taught. I think Socrates is stepping Meno through a number of hypotheses about knowledge, all of which he thinks are wrong, intending to make Meno dissatisfied with all of them; the theory of recollection is introduced primarily to lever Meno towards conceiving of knowledge as something that must be supernatural. The answer Socrates actually believes himself would (of course) be the Forms, as detailed in the Republic or the Phaedo. But that's just my own personally favored interpretation, and I think most people disagree with me and think Socrates actually endorsed the theory of recollection, so YMMV.
Weird. A friend of mine, AFAIK well versed in theology, once told me that (at least according to some) once a sin is confessed and absolved it is effectively "erased", so that in the afterlife nobody knows who ate your sandwich at all.
But of course I can't find anything to back that up right now.
In Dante's Inferno the spirits of the damned can only foresee what will happen in the distant future; their clairvoyance fades the nearer the events are in time. They know nothing about what happens in the present.
"Most believe that what he says applies to all the damned, e.g., Singleton in his commentary to this passage (Inf. X.100-105). On the other hand, for the view that this condition pertains only to the heretics see Cassell (Dante's Fearful Art of Justice [Toronto: University of Toronto Press, 1984]), pp. 29-31. But see Alberigo, his soul already in hell yet not knowing how or what its body is now doing in the world above (Inf. XXXIII.122-123)."
The thing that we call "knowledge" on earth is completely different from how things are in heaven. To the point that Paul (the writer of 1 Corinthians) isn't even willing to call what happens in heaven "knowledge" without a bunch of qualification. Earthly knowledge is full of complexity and distortion and misunderstanding, like looking at an object in a messed-up mirror. Heavenly knowledge is like looking straight at the object. Another example he gives is that earthly knowledge is like the incoherent babbling of a baby, and heavenly knowledge is like straightforward adult speech.
That makes it very hard for theologians and pastors to teach about what heaven is like, when one of Christianity's main sources basically says that earthly human thought is defective and is going to be replaced by something better. "The first thing I have to teach you about this is that anything I teach about it is definitely wrong."
The tradition I was raised in would probably say if you wanted to know it all you could, but when you get to heaven you aren’t concerned with earthly things so you probably wouldn’t try to find out.
The Book of Revelation is probably the number one source for the eternal afterlife with God. My understanding though is that you do not instantly know everything; rather you have an infinite amount of time, and access to God, who does know everything.
Also, you apparently get to live in a Pluto-sized arcology constructed in space after Earth is destroyed, following a war in which trillions or quadrillions of people are killed by bombing (possibly orbital bombardment).
Or at least, that's the literalist way of reading Revelation.
Super interesting question. I'm an ex-orthodox Jew but the same question could apply. My understanding is that the Old Testament doesn't mention the afterlife at all and it's clearly a newer invention of monotheistic religions. I think the afterlife is sufficiently abstract and unsupported that every commentator could keep adding their own flourishes on what you get once you die. If I had to guess, I'd say that as religious commentators became more philosophical to match the philosophies of their time periods, they got really into the idea that the afterlife meant "none of the limitations of the physical world". So not being held back by "time" means experiencing past, present and future simultaneously. Not having eyes means "you can see everything". Etc. etc. This is obviously more of a guess answer and I'm sure there are actual experts here who can be a lot more specific.
I love this guy and I'm sure he'll have the best solution:
The Hebrew bible does mention an afterlife, but very scantily. Daniel ch. 12, for example. However, the rabbinic speculation about the afterlife is rife. And one rabbinic comment (influenced apparently by the Socratic assumption) is that you know all of Torah before you are born and then an angel presses you above the lip (the philtrum- that little indentation above your lip) and you forget it.
A nice anecdote about that: as Emerson was dying he was losing his memory and Bronson Alcott told him we forget things of this world as we forget things of the other in the transition.
Casually, I’ve heard that “when you die, you go to god and are with god, and god is all knowing, and you talk to god, so you can ask god stuff, and it’s heaven so you have all good stuff.” For a history of heaven a brief overview is https://en.wikipedia.org/wiki/Heaven#Hebrew_Bible. Different Christian cultures and denominations and times and peoples develop the idea of afterlife and heaven and such in complex and different ways. There’s a lot of thinking and folk ideas and variation and differences there. I don’t know much about it
I have some doubts about the plan to throw so much money at the "global climate problem". Please explain to me where I go wrong or with which points you disagree the most:
1. So far, the world is about 1 degree C warmer compared to the preindustrial era (leaving aside doubts about the precision of estimating preindustrial-era temperatures). So far, nothing terrible is happening; extreme weather events are not more devastating. The damages as a portion of GDP have plummeted, and the number of "extreme weather deaths" too, despite the much bigger world population. I sort of don't expect the following strong non-linearity to be true: compared to the preindustrial past, one degree up, nothing happens (or things even improve); another 1 or 2 degrees up, all hell breaks loose. Is there some magic number saying "this is the exactly correct global temperature"?
2. Some countries seem to actually benefit from modest, say 2 C, temperature increase. Canada, Sweden, Russia etc. There are surely some losers, but the net balance may even still be positive.
3. Even the IPCC, which overall sounds quite alarmist to me, estimates the global GDP loss from "untackled" global warming at only a couple of percentage points by 2100 (when global GDP is expected to be 4 or many more times higher than today).
4. CO2 seems to be fairly beneficial in some ways: agricultural yields are estimated to be 15 percent higher compared to a situation with pre-industrial concentrations. It is the main plant food: some deserts are already shrinking, the planet is greening. Surely it is a good thing?
5. Global warming seems to be saving lives even now: the drop in deaths from cold is several times higher than the increase in deaths from heat. No one seems to care for this, though.
6. The green deals seem to be extremely expensive, with a lot of the funds going as subsidies to inefficient technologies (solar, wind). If you argue that these technologies are already efficient, please explain why they need subsidies. This and new taxes have the potential to choke economic growth. I believe that my grandchildren would be better off in a world that is 4 times richer and 2 - 3 C warmer than in a world that is only 1 C warmer but economically stagnating. After all, rich Gulf countries have no problem operating in 50 C outside temperatures. And we are still talking about a difference of only 2 C, not going from 20 to 50 C. The necessary adjustment seems very minor to me, with plenty of time to adjust.
7. If things really seem to go bad, there are several geoengineering projects that might work. And these would be still two or more orders of magnitude cheaper than those green deals.
8. Many alarmist messages go hand in hand with subliminal or open "we have to end capitalism" or "our only chance is degrowth". Somehow I fear that for many people this might be the real agenda.
9. China, India etc still keep building coal power plants at a fast rate. What if they (as I expect) will not play along? Should the West economically suffer and let them outgrow us (when they emit the CO2 we save, undermining all the green deals of the world)? Will a much stronger China be kind to us?
> the plan to throw so much money at the "global climate problem".
Hol' up a minute. Other respondents have responded to your numbered claims, but what about this?
The plan? There's a plan? A plan to actually do something? There's "the" plan to do something? Where is it, and can I read it? What are the numbers?
I thought I was pretty well up with this stuff, but all I see is politicians trying to draw attention to themselves, without actually *doing* anything that would antagonise their donors. I haven't seen anything that might be described as a plan, let alone co-ordinated action.
For example, two weeks ago the UK's minister for the environment said that the upcoming COP-26 talks in November were "the last chance to avoid catastrophe", and in almost the next sentence he announced the issue of new oil and gas exploration permits in the UK section of the North Sea. No sign of a plan there.
Regarding the temperature, 1 and 2, the problem is feedback loops. For example, snow is shiny, reflects a lot of light back into space. If the ice caps melt, the albedo drops, and then earth absorbs more energy directly. Another dangerous feedback loop is the permafrost: a bunch of frozen plant matter in the soil in Northern Canada and Russia that's been frozen for hundreds of thousands of years. If it thaws (which it allegedly will at +2C), all that dead plant matter will decompose into GHG, and the amount of mass we're talking about here is greater than the amount of GHG we've emitted so far. These nonlinear processes make specific predictions hard, but they point very strongly to "bad."
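If it helps, here's a toy way to see why feedbacks make the response non-linear: treat each unit of direct warming as triggering some extra fraction f of further warming, which itself triggers more, and so on; the geometric series sums to 1/(1-f), so the total blows up as f creeps toward 1. The feedback values below are made up purely for illustration, not estimates of the real albedo or permafrost feedbacks:

```python
# Toy illustration (made-up feedback values) of how a positive feedback
# amplifies an initial warming: each degree triggers f more degrees, which
# trigger f*f more, etc. The geometric series sums to 1 / (1 - f) for f < 1.
def amplified_warming(direct_warming_c, feedback_fraction):
    assert 0 <= feedback_fraction < 1, "f >= 1 would mean runaway warming in this toy model"
    return direct_warming_c / (1 - feedback_fraction)

for f in (0.0, 0.3, 0.5, 0.7):
    total = amplified_warming(1.0, f)
    print(f"feedback f = {f:.1f}: 1.0 C of direct warming -> {total:.1f} C total")
```

The only point is qualitative: modest-sounding increases in feedback strength (lost snow albedo, thawing permafrost) move the total a lot, which is why the specific predictions are hard but skewed toward "bad."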
Re 7, I agree, but we still need to fund these projects, and that requires believing there's a problem in the first place.
Re 8, I suspect it's true, there are people who don't REALLY worry about climate change except as an auxiliary argument for their pet issue, be it socialism or some spiritual environmentalism. It's important, though, not to let wrong people cause you to reflexively adopt a different wrong position (like "we don't have evidence that climate change will be bad."). Some people are dumb and have bad arguments, but do you really want to shape your worldviews based on the bad arguments dumb people have?
I also have the impression that the bad effects are overplayed and the good effects underplayed.
But on the other hand, there are some effects that deserve to be called catastrophic, and that can not easily be solved by throwing money at them:
- The oceans become more acidic (carbonic acid). This will kill a lot of species, plain and simple. Even for species not affected directly, there might be secondary effects. The marine ecosystem might look really different from today if the earth is 4 or 6 degree warmer.
- The same might hold for land-based species, but here it's less clear what happens. Ecosystems might or might not collapse.
- Our civilization has thrown a lot of money and effort into optimizing infrastructure for the environmental conditions that we have now. Like, the population density is highest close to the coastlines, and the most valuable buildings are there. If the sea level really rises a lot, this means a lot of people will be displaced, and a lot of value lost. If some 100 million people from Bangladesh have to move, that is a problem that can threaten stability. If some parts of California just turn to desert, this is a problem. The answer "California is just a place, we can move elsewhere" is not completely false, but has lots of ramifications.
A problem is that the changes have a momentum of centuries. If we want to avoid changes, then it is not an option to wait and see whether they are good or not, and re-engineer climate if it becomes unpleasant. Once the Greenland ice sheet becomes unstable, it's gone, and re-reducing the temperature will probably not change that. A lot of other things are like that. Once the jet streams or the Gulf Stream are gone, it's very unclear whether they come back in a cooler climate.
So it comes down to a gamble. I don't think that we know it's going to be disastrous. But the chances of disaster are non-vanishing enough that I would like to avoid them, even at high costs now.
Fair enough, surely one valid point of view. Just a note towards the acidification: oceans are alkaline, this "acidification" is actually only a tiny move towards neutral pH. People tend to imagine a sea of acid, a terrifying thought for sure.
I still agree that this change might be lethal to some organisms even so.
Yes, it's not going to be harmful to humans. But afaik, even the current changes are already problematic for animals which use calcium carbonate, which is quite common in the sea. (Sea shells, algae, sponges, ....)
On 6, governments subsidise a bunch of things for a bunch of reasons. In particular, fossil fuel subsidies are much greater than subsidies for renewables.
Solar and wind are extremely cheap and effective. (I saw one discussion that it costs less to build and operate a wind farm than it does just to operate an existing coal power plant of the same energy output, but I can't find it right now.) But it's hard for them to compete against the level of lobbying, inertia, and financial support of coal.
But even if they weren't competing with a subsidised fossil fuel market, there's an argument for subsidising them to make the transition happen sooner. (Obviously this only applies if you accept that global heating is bad, which most experts do.)
I just took a look at the OCI Shift the Subsidies database which is allegedly the underlying data behind the linked article. I’m super unimpressed with this data source. Most oil companies globally are nationalized so, yea, they use their status as a state owned entity to get preferential credit. That doesn’t mean those projects wouldn’t get financed otherwise. (also, financing isn’t a subsidy). In fact, I suspect more projects might be financed given how inefficient state owned enterprises tend to be. These people have a huge agenda that taints the entirety of their data collection.
The oil and gas industry is both immense and profitable. Probably because oil and gas gets used in almost everything we do and touch. You can critique the industry for all sorts of things, but saying it wouldn't be profitable without government subsidies is completely nuts. In the US, the government gets a share of the mineral rights, so it probably looks more like a net tax than a subsidy.
Coal plants are a hassle to operate. Nobody wants to buy them in the US right now. But they're also way more effective at providing base power, which matters a lot, like at night. They don't need lobbyists to ensure their existence. Once renewables have a base-power value proposition, coal will be done.
As you can probably tell, I’m not very tolerant of the excessive excuse making by the renewable crowd. Just about everyone has bent over backwards to make wind and solar a thing and it’s time to either let them compete unfettered in the energy markets or move on to other technologies.
I never said fossil fuels need subsidies to be profitable (if the site I linked to said that then I apologise for not reading it thoroughly). But in general if you've got two competing products that would each be profitable on their own, then the amount of subsidies they get still has an impact on the relative market share.
I kind of take your point about the data gatherers having an agenda, but also, is there anybody without an agenda who's done this kind of analysis? If I'm right, and global heating is a big bad thing, and governments are doing too little too late, then anybody who looks at the data will come to believe that if they didn't already. When they present their data they will have a strong opinion and you'll be able to accuse them of having an agenda.
I agree that you need base power. I'm pro nuclear for that reason. Also putting batteries in houses.
I'm sure it won't surprise you to learn that I don't really think "the excessive excuse making by the renewable crowd" is at all fair, and the idea that anybody has bent over backwards... I'd characterise that as a few token drops in the ocean so that governments can look like they're doing something. But I'm not sure we'll get very far by trading perceived hyperboles.
I'm sure there are a bunch of places where wind and solar are the best choice for the majority of electricity generation. But those technologies are already mature and competitive, so they have tons of investors and don't need extra help. The nuclear industry, on the other hand, was almost completely wiped out around 1980. It doesn't just need to be rebuilt, it needs new technology to survive in the modern regulatory environment. So on the margin, I'd put all my money on R&D for Generation IV nuclear, especially molten salt reactors.
I don't think I can answer that. First of all it depends on the country how important each is. Second, I think they need different things. Wind and nuclear suffer from NIMBYism. Regulations need to be eased to make it easier to build on-shore wind farms and nuclear power plants. One problem in the UK is that the government has a lot of MPs in rural areas with gentle rolling hills and conservative citizens who don't want their view to be changed. So you don't necessarily need to spend money on subsidising wind farms in those places, but you do need to have a government that's prepared to risk throwing some of its MPs under the bus for the sake of the country (and planet) as a whole. Of course their incentives make that basically impossible.
For solar, I want to see new build houses having more than a token 3m^2 of solar panels built into their roof. The government doesn't need to spend any money at all here: just impose regulations on house building. (I recognise this will increase the cost of that housing but I think it's worth it.)
I also think nuclear fusion is important in the long term. But I don't know enough about the state of the research to know if it needs any help at the moment.
I have many more thoughts on the precise ways to encourage those three areas, but I'll spare you the full version. But I hope I've explained why I don't want to talk about spending hypothetical points.
> I sort of don't expect following strong non-linearity to be true: compared to the preindustrial past, one degree up, nothing happens (or things even improve), another 1 or 2 degrees up, all hell breaks loose.
Wow. What if I told you that a glance at a phase diagram would demonstrate that thermodynamics is really *all* about nonlinear effects? If you're not convinced, maybe try an experiment: vary the temperature of an ice cube linearly and try to see if you notice any nonlinear effects.
Fair point, but still: I believe that there are some non-linear effects, and possibly even positive feedbacks (although nature is rather full of negative ones). Nevertheless, NOTHING bad happened so far (within some statistical noise). Why should we suddenly be at a tipping point? Such that ALL HELL IS GOING TO BREAK LOOSE AND WE ALL BURN TO DEATH (paraphrasing the gist of most media coverage that comes to my attention).
Is there anything special about 15 C global average so that say 17 C is terribly dangerous?
Aside from the mass extinction (which isn't really a problem for humanity), the big issue is the amount of stuff near sea level and the amount of shift in crop-growing regions. It's not that the state of the world is better at 15C than at 18C, but that the optimal distribution of stuff in an 18C world is significantly different from that in a 15C world and "location of cities/farms" is difficult to change quickly without mass deaths (Siberia is going to become a lot more habitable and Bangladesh a lot less so, for instance, but "relocate all the Bangladeshis to Siberia" involves substantial retraining of farmers, construction of infrastructure and simple transportation difficulties).
WRT sea levels it should be noted that there are engineering margins of error basically everywhere; when those margins of error are exceeded, well, you're going to go from "not a big problem unless you get hit by a hurricane" to "a bunch of CBDs are underwater" fairly quickly. It should also be noted that sea levels are very slow to react - we've got +1C, but we haven't got the full +1C worth of sea level rise yet.
You're definitely right, though, that human extinction isn't going to happen from this. Even in the worst-case scenarios you're looking at ~1-2 gigadeaths; runaway greenhouse ("Earth turns into Venus 2.0, everything dies") has pretty much been ruled out (as in, "it's possible with sufficient amounts of GHG, but there aren't enough fossil fuels on the planet to make that much CO2, and the super-effective GHG like CFCs have been successfully phased out").
> "relocate all the Bangladeshis to Siberia" involves substantial retraining of farmers, construction of infrastructure and ...
... and Russia deciding it will issue millions of visas to Bangladeshis. But this is unlikely, as is Mexico/US/Brazil granting visas to Central Americans, etc. So anti-immigrant sentiment is one major reason I'm concerned about global warming.
I'd say "Nothing bad has happened so far" is also an exaggeration.
Consider the level of drought and food insecurity we are seeing now, vs. 20 years ago, even with modern technology.
I'm involved peripherally with agribusiness personally, and shit is fucking precarious for lots of high value crops already. If rain becomes any more unreliable, we could see some staple crops becoming likewise unreliable in certain areas. E.g., there has been less snowpack on the Sierras year over year for 25 years, and if the trend continues it will be BAD for the world's largest class 1 farming zone.
Look at GSCI and BCOM. The market trades within a range and generally downward.
If you feel the future will be different, put on the trade. But you can’t cherry-pick a particular crop and use that to support some broader climate agricultural catastrophe that isn’t happening.
It feels to me like there's an intuitive disconnect between "1 degree is a small amount of temperature shift" (true in any of my day-to-day experiences) vs. "1 degree is a HUGE amount of temperature shift for a planet-wide average". In particular, when you say "another 1 or 2 degrees up, all hell breaks loose," you seem to be treating 1 degree and 2 degrees as similar amounts of temperature shift, whereas I think for a planet-wide average they're really not.
Another thing is that there's much less warming on water than on land, and much less warming near the equator than at medium and high latitudes, so each 1 °C increase in the global average is a rather larger increase on temperate land.
But why? Even the "planetary average" I believe is an ill-defined concept. No single place on Earth cares too much about the planetary average. It cares about the specific temperature it experiences. I have myself seen a change of −44 degrees in 24 hours (Czechoslovakia 1979). We had some "coal holidays" because of that; the coal froze and was hard to unload from the freight trains. So heat and electricity were rationed. Lots of fun back then. Local plants and animals are still well adjusted to such wild swings, so why would they suffer so much in, say, a 2 C per century drift? Some things will change for sure, but some things will change in all other scenarios too.
I think e.g. rainfall is driven by much larger-scale patterns than what you would normally experience. If you held everything else constant but just raised the temperature 1 degree C I think many places would be ok (unless that 1 degree C went from -0.5 to +0.5 and now solid ice becomes liquid water), but I don't think everything else is constant, fundamentally I assume because somewhere on earth lots of solid ice is turning into liquid water.
After all, my understanding is that the last ice age was only 4-5 degrees lower than current average; 4-5 degrees lower than current also doesn't feel like that big a deal (I live in a place with >30C summers, subtracting 4-5 degrees doesn't exactly cause water to freeze), but manifestly was.
Point taken. On the other hand, rainfall patterns change all the time, adaptation might be much easier and cheaper. Because of more evaporation there would be probably more rain overall, which is probably mostly a good thing. Speaking of ice ages, well, there is a problem we should be eventually much more worried about imho. Maybe we actually should save the carbon, to use it for preventing the next ice age in the future.
The optimal temperature is probably whatever temperature human civilization has developed at, since any temperature change forces large changes to how everything is arranged. Also, on a personal note, I suspect the cost of a species going extinct might be enormous, since biotechnology has barely begun, and every time a species goes extinct, we lose a potential set of biotechnological tools.
3. The above study estimates global GDP might be reduced by around 23% by 2100. The IPCC probably wouldn't make a super specific estimate like that, since there is huge political pressure on them to be as conservative as possible. However, even though that may represent the total GDP loss - there will be many who are far worse off. People in hot, dry areas may be forced to seek asylum elsewhere.
Also, the risk of triggering positive feedback loops would be a pretty major cost [IPCC AR6 WG1, SPM section C3, p37]
Solar and wind are now cheaper than fossil fuels (https://doi.org/10.25919/16j7-fc07) however our electricity infrastructure hasn't been built for variable, distributed generation. We currently lack energy storage on our electricity grids to "firm" the supply. However, we can simply build large batteries - lithium batteries will probably get a lot cheaper very soon - this still will require public investment, but we can't afford more climate change. Or we could just use nuclear. Or we could just tax carbon like the vast majority of economists want to (https://www.econstatement.org/)
7. What projects are you thinking of, and what are their costs? Geoengineering technologies for climate change are all in their very early stages - have you considered the risk that they won't work?
8. It could turn out that those people are right, couldn't it? What do you think?
9. China's net zero commitment is super vague and the CCP has delayed climate action before (https://www.theguardian.com/environment/2009/dec/22/copenhagen-climate-change-mark-lynas). However, it's nearly finished implementing an emissions trading scheme, putting it a few steps ahead of America, Australia et al. But, in any case, climate change is a global problem - no one country can stop it. That's why international agreements such as Paris are so important for us to stick to, and not feel emboldened by laggard countries to be laggard ourselves.
8: It is of course possible that anti-capitalism would be the best system in certain contexts, at least (e.g. I'm sure that some form of fairly extreme socialism is optimal for *some* level of technological sophistication, population size, mix of government types around the world, etc., whereas capitalism would be optimal for others, almost no matter how you define "optimal"). But if that's the case, that should be argued on its own merits, rather than trying to make the (fairly weak and almost universally poorly argued, IMO) case that it is the ONLY solution to climate change.
Well, the heuristic is fairly straightforward: capitalism (market economy) works; nothing else does. Ever. Something else might hypothetically work in the future, when there is a total abundance of everything thanks to robots etc. My guess is though that without capitalism we will lose democracy - because the political class will be hundreds of times more powerful than today, deciding ALSO about the allocation of all the resources. Or some kind of AI, or a blend of the two. What could possibly go wrong, eh?
Just to pick a small part of this to demonstrate my frustration with these discussions:
"By 2100, 300-600 million people might be displaced by sea level rise alone"
Is that number based on humans doing nothing in reaction to rising sea level? Why would we do nothing?
I'm aware of the fact that Cape Town was able to reclaim several blocks of land where there used to be sea and this was decades ago. Hundreds of years ago the Netherlands performed miracles with water and land reclamation. It's just so hard for me to believe we won't be able to save New York / London etc with some engineering and human ingenuity. We're masters of using technology to adapt to the elements.
My main concerns remain: A. People not wanting to leave their air-conditioned homes and B. Loss of bio-diversity.
I think most cities that have "reclaimed" land from the sea usually took places where the sea was very shallow, added additional rock/soil/etc to raise it above sea level, and then built on it. It almost goes without saying that those locations were unbuilt before they added rock/soil/etc, because those locations were underwater.
I think the process of taking a place that was already built and raising it is much harder. The only example I am aware of is the Raising of Chicago in the mid 19th century: https://en.wikipedia.org/wiki/Raising_of_Chicago
I suspect that some cities might be able to do this with some of their relevant neighborhoods, but very few particular places are going to be worth investing this much effort in - the costs of this sort of raising would thus be even greater than the costs of displacement of the several hundred million people that live on land that will end up below sea level. (I suspect that New York, Miami, London, etc. will be worth investing in, but the hundreds of millions of people displaced are mostly in Bangladesh, Indonesia, and other low lying highly populated places.)
No, it's based on a few different projections of what emissions could end up being, from little action to high action. Obviously there are some big unknowns and uncertainties - this is always going to be the case for a topic like climate change, which encompasses basically everything on the planet. Read the study here https://www.nature.com/articles/s41467-019-12808-z
What proportion of the world has air-con (or would if they could afford it)? I live in Southern Europe and the first time I saw an air-con unit was when I visited America.
I hope to get back here when I have more time. Right now only this:
8. - as someone who spent half of his life in a non-capitalist country (Czechoslovakia), I am absolutely sure, I mean absolutely, that you don't want to lose capitalism. It is the engine that makes all the money you need to eventually take care of the environment, and of everybody's needs in fact. When we got rid of the communist rule in my country, the environment improved incredibly, within a few years. I even think that without capitalism you lose democracy (there was never any successful democratic country without capitalism - meaning market economy. Scandinavian countries have market economies, to be sure). As for degrowth - I believe that without growth you have a recipe for a social disaster. Everything suddenly becomes a zero-sum game - good luck with that. From another angle, the richer a country is, the easier it is for it to handle emergencies, disasters and problems in general. More growth means more wealth.
9. Chinese communists will do absolutely nothing to threaten their economic growth. It is a part of their social contract and power is all they seek. Also, they cannot be trusted, the communist assholes they are. I have seen this in and out, they can be forced to do something, at best. Do we have enough force?
I don't think your assumption regarding a link between capitalism and environmental improvement is correct. Between the 60s and the 90s, the US also vastly improved environmental conditions...got rid of acid rain, cleaned up water and air, etc., and that had nothing to do with becoming more capitalistic; it was because of better knowledge, technology, and regulations. China has improved too. I see no link whatsoever between using technology to implement environmental improvements and whether that is funded by private or public ventures (indeed, there is often no profit motive there, so it usually has to be implemented by public investment or regulations).
He didn't say "capitalism is the only thing that can lead to environmental improvement." What he is referring to is the reputation Stalin and Mao's regimes had for astounding degrees of waste. When you remove the price mechanism for fuel, for waste disposal, for resources, none of the workers or middle-managers or project czars have any incentive for efficient use of resources. If they need more coal to build some bridge, they ask the party for more coal and it gets sent over. No one keeps track of how much resources they consume because no one has to pay for it.
We find, just empirically, that the price mechanism keeps everybody more on their toes about resource consumption.
From Poland. The problem with communism was that a massive part of the economy was completely dysfunctional, with everyone's interests utterly detached from any sort of productivity.
So you got the worst parts of capitalism with no benefits.
> better technology
made available in Poland thanks to the collapse of communism; the other two as well, but this one was the clearest
> The Soviet whalers, Berzin wrote, had been sent forth to kill whales for little reason other than to say they had killed them. They were motivated by an obligation to satisfy obscure line items in the five-year plans that drove the Soviet economy, which had been set with little regard for the Soviet Union’s actual demand for whale products.
China's economic system is about as communist as I am Chinese.
> I see no link whatsoever between using technology to implement environmental improvements and whether that is funded by private or public ventures
the problem is not "public" part, it is "dysfunctional as fuck" part (guaranteed in centrally planned economies that are detached from actually doing something useful)
8. In general, I agree that capitalist societies have done a lot better. I certainly don't foresee this trend continuing once AI becomes capable of replacing humans at nearly everything. But there is a major flaw in capitalism if the costs of climate change remain external to the market.
9. Although China certainly burns a lot of coal, decarbonization is also a major opportunity for them - they make a large proportion of solar panels and batteries. My feeling is that the best approach to tackling the CCP is to continue raising tariffs and slowing international trade with China, until their government agrees to democratize and respect human rights.
Just to expand on your point about richer countries handling disasters better, it is often illuminating to ask the climate-alarmed which climatic extreme kills the most Americans, and what is that death rate? You know, so we can compare it to the 30-odd thousand who are shot to death or the similar number who perish in road accidents.
It's floods which are the most deadly of all the climate disasters - about one hundred Americans lose their lives to flooding every year.
Such a relief that AR6 doesn't suggest that flooding is getting worse (despite what some headlines might misleadingly hint at)
"At 1.5°C global warming, heavy precipitation and associated flooding are projected to intensify and be more frequent in most regions in Africa and Asia (high confidence), North America (medium to high confidence) and Europe (medium confidence)"
So, if flooding does (eventually) increase by 10% might we expect an extra 10 Americans to die every year? No; because deaths from all climate phenomena have been plummeting, everywhere, for as long as there have been records. Extreme weather was a vastly greater problem in the past than it is now, and will be in the future.
That is what irritates me most: every weather event is "because of AGW", according to most media. As if these things did not happen before. It is ok to use statistics to prove the point, but they mostly don't do that.
If it was not for the media coverage, 99% of people would not be able to make any conclusion, whether it is getting warmer or cooler or whatever. Except for the subjective cognitive bias "this did not happen when I was young".
I've seen groups like World Weather Attribution quoted in news articles to justify linking extreme weather events to climate change. Some news sources might not, but I've definitely noticed it happening. This is a group that estimates what percentage of an extreme weather event can be attributed to climate change, check them out - https://www.worldweatherattribution.org/
This chimes in with Scott's thoughts on cost-benefit studies from the last links post. William Nordhaus estimated that the optimum level of global warming would be about 3.5 degrees C. Of course then you realize why nobody else has bothered to do a cost-benefit analysis - it doesn't fit anybody's narrative. Seeing as BAU is taking us in the vicinity of 3.5 degrees, we merely need to carry on doing what we're doing - 3 or 4 hundred billion a year in subsidies for renewables and all will be well. The 'Very Alarmed' simply cannot have that - as Gurdjieff nearly said, people will give up anything you care to mention, except their alarm. And the 'Very Sceptical' also simply cannot have that because it would necessitate accepting that 3 or 4 degrees of warming could be created by our carbon emissions.
You would think that with such a supposedly big problem with vast potential costs, the helpfulness of assessing costs and benefits would be great, but it seems not.
Even capping warming at 3.5 degrees requires substantial investment, though.
I am no expert, but what I've read about the topic strongly suggests that while countries formally committed to 2 degrees, there is no real expectation that this goal will be met without some unexpected technological (or political) breakthrough. So we will get a much higher rise in temperatures than planned, but lower than if we did nothing.
There have been a number of cost-benefit analyses. The most well-known are Stern (2006) and Garnaut (2008). Both found reducing GHG emissions had higher benefits than costs.
I don't know Garnaut, but I know the Stern review uses some quite nonstandard assumptions to reach that conclusion - specifically, it discounts costs and benefits at 1.4% per year as you project them into the future.
The model Stern (and basically every economist) uses here is that money can either be spent on things we like now, or saved / invested and then we have MORE money to spend in the future. When we're talking about intergenerational transfers of wealth (which we are when we talk about climate change) then everything we spend now preventing climate change 'steals' money from the future that they could use for other things they want. So the question is, how much would money that could be 'gifted' to the future now via investment actually worth when the future rolls around? Alternatively, what do we 'steal' from the future when we spend money now on making the world better for them?
NB - Apologies that 'steals' is a bit of a pejorative term - I couldn't think of another phrase that meant that you're taking money from future generations without their consent, but in this case you're taking the money with their best interests in mind whereas most people who steal don't do so with good intentions.
Stern says that the answer is that we 'steal' 1.4% of the value of the money each year by spending it now rather than investing it. This means that Stern argues that the £100 we spend preventing climate change now would only be worth £400 in 100 years' time (so if the value of the improved environment we gift the world is worth >£400 it is positive expected value to improve the environment). However, it is really not clear where this 1.4% comes from; the UK government uses 3.5% as a much more standard discount rate, which would imply that £100 spent now 'steals' £3000 from the future, which is nearly ten times higher. Stephen Watson below suggests that the Nordhaus analysis (which I haven't read) uses 5%, implying we 'steal' nearly £12500 from the future!
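To make the compounding arithmetic concrete, here's a minimal sketch (my own illustration; the function name and the rounded outputs are mine, not Stern's or the UK government's) of how £100 grows over a 100-year horizon at each of the rates mentioned above:

```python
# Hypothetical illustration: what £100 spent today 'steals' from people 100
# years from now, under different annual discount rates. The rates are the
# ones discussed above: Stern ~1.4%, UK government 3.5%, Nordhaus ~5%.

def future_value(amount: float, rate: float, years: int = 100) -> float:
    """What the money would have grown to if invested at `rate` instead of spent."""
    return amount * (1 + rate) ** years

for label, rate in [("Stern (~1.4%)", 0.014),
                    ("UK government (3.5%)", 0.035),
                    ("Nordhaus (~5%)", 0.05)]:
    print(f"{label}: £100 now ~ £{future_value(100, rate):,.0f} in 100 years")

# Prints roughly £400, £3,100 and £13,200 - the same ballpark as the figures
# quoted above, and it makes clear how much the answer swings on one parameter.
```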
I ended up a big sceptic on Stern - discount rate is almost single-handedly the parameter which leads to his eye-catching conclusion, and yet it is different from the standard value given by the UK government for future planning, Stern doesn't really explain where 1.4% actually comes from and there is inadequate sensitivity analysis around this key number. It wouldn't surprise me if preventing climate change did have a positive cost-benefit ratio, but it felt to me like Stern decided that that was going to be his conclusion and made the data fit his theory rather than the other way around.
Note - discounting is not just about the rate of return on investment, but also about the relative importance we put on future suffering/pleasure compared to present suffering/pleasure. A 1.5% discount rate might be low on financial investments, but it is probably high to put on the suffering/pleasure itself - it seems quite plausible that 0% is the appropriate discount rate for the actual hedonic value.
I think the hedonic argument is plausible, but money really isn't hedonic value, because money can be invested. Even if we assume 0% hedonic discounting, in 100 years we could have 400x the money to spend on whatever hedonic thing we want. So suffering/pleasure doesn't become worth less, but it does become cheaper at a regular financial discount rate.
Sorry, I should add that it wasn't like Stern pulled the number 1.4% out of thin air - he argued (probably correctly) that the discount rate depended on the situation with the global economy, and that the global economy would react to different climate scenarios differently. So Stern didn't actually have any single discount rate, but rather a range of discount rates of which the average was about 1.4%. But then as per my original comment this is significantly less than the UK government projects while looking at the same data, so at least one of either Stern or the UK Government is materially wrong here, and Stern does a less convincing job of identifying where his numbers come from IMO
There's a pretty good chance Nordhaus has a flawed approach to discounting social costs and benefits - when weighing up intangible benefits into the future, he usually discounts them with a rate similar to the economic discount rate, 5% or so. Though there is considerable divergence on the issue, and many think it should be even higher, the most common opinion among experts nowadays is that this discount rate should be zero. The average is 2.6%. https://www.jstor.org/stable/pdf/26529055.pdf
This seems like more of an argument for climate change being good, as opposed to a criticism specifically of either climate charities and policy or say Stripe’s carbon removal pilot program? I share concerns about overreach here but ... this seems like a disconnected and relatively unsupported list of questionable reasons that barely tie together, so idk how useful it is to respond to them all.
1. Not an expert here, but there are petabytes of arguments about why climate change is, in fact, bad for the environment.
2. Not an argument? What reason is there to believe “the balance maybe positive?” That’s just a disconnected statement
3. A couple percentage points isn't great! Normal policy stuff barely has the ability to increase GDP that much. Free trade agreements individually raise GDP by ~0.01%, and that's also probably the ballpark of most other policies. It's especially not great for the people and areas the loss is concentrated in. But an x% GDP loss could mean anything from Beijing getting nuked (bad, you would think), to the entire fast food, cosmetics, most of fashion, video game cosmetics, most of marketing, MLMs, and those sorts of industries getting outlawed (which would be awesome), to the pandemic, and everything in between. But a 3% estimated GDP loss, which probably isn't that exact, could correspond to something bad. Probably worth directly discussing that instead lol
4. Again, not an argument. Yeah co2 increases ag yields by a decent amount, that is known and taken into account, well or not. Why would that be better than the harms climate change might cause? What are those?
5. Source? What? How am I supposed to understand this? The IPCC predicts lots of deaths and displacement from climate change, likely on net; how am I supposed to even begin to think about why? Where can I learn about this?
6. Current expert consensus is that solar and wind are already price competitive in some areas. And they will continue to follow a learning curve and decrease. Are they wrong? Why? The GND might be too expensive and might decrease growth, but how much and why? How could it crater GDP by a factor of 4 relative to future?
7. Estimates? Huh?
8. Yes, they are kind of dumb. But, say, Stripe's program or Biden's goals don't seem to include that?
9. Some theories are that solar's price competitiveness will lead to worldwide adoption. And that US energy consumption has stopped growing due to much more efficient products.
For all I know, climate change isn’t real. But this doesn’t convincingly argue that.
Look I'm definitely no expert, just a guy with opinions. But I think a lot of people are underestimating extreme discomfort as a reason for concern.
I'm pretty confident general infrastructure advances can mitigate a lot of the bad effects of GW, and the whole "wars fighting over resources threat" always struck me as silly and unhinged.
But it's going to really suck if large parts of the world have unbearably hot summers where you sweat your arse off whenever you leave air conditioning. That might be hard to put a price on, but I'd argue global comfort is on par with ideals like "space exploration" and worth throwing a ton of money into.
Of course we also don't want to make a ton of wild animals go extinct either.
I think cheapish geo-engineering is becoming more inevitable every day.
> But it's going to really suck if large parts of the world have unbearably hot summers where you sweat your arse off whenever you leave air conditioning
Right now large parts of the world have unbearably cold winters where you freeze to death if you go outside without heavy clothing. It seems like swings and roundabouts to me.
Even a three-degree change doesn't seem like a big deal to me. An "unbearably hot" day for me is anything above 40C; that happens a few days a year where I live, and maybe would happen twice as often if the weather were three degrees hotter... but that still doesn't seem like that big a deal.
How many people live in places where you'll freeze to death if you go outside without heavy clothing, compared to places with unbearably hot summers? I can't be bothered to do the math, but my guess is that the difference is at least a factor of three. Is it worth it if one billion people are less at risk of freezing to death if it means that three billion people are more at risk of dying of heat stroke?
My guess is a factor of ten the other way round - lots of guesses going on!
I'd add that if you look at the destination of people who move from one place to another, it is generally towards warmer places. People retire to Florida, not Alaska.
Also, the vast (mostly) uninhabited parts of the planet (Siberia, Mongolia, Greenland, Antarctica) are just too damn cold for human beings. After all, we live in an ice age!
Another way to look at the graph you linked to is to ask 'What is the temperature of the places people choose to live in?' And the answer is - warm places rather than cold. Near the equator rather than the poles.
Why is the whole of Northern Canada almost completely empty and the Southern United States nearly full??
If we didn't have artificial skin (clothes) and vast amounts of heating, the majority of people on the planet would simply die of cold. Before the discovery of fire, we could never have left Africa.
What is the difference in the energy spent on heating as opposed to cooling?
The average temperature of Earth is about 15 degrees and the optimum for humans is significantly higher than that.
My temperature concerns are generally reserved for the speed of change, not particularly the amount (up to Nordhaus's 3 or 4 degrees)
So maybe I guess the thing to calculate is cost of moving significant numbers of people to cooler areas.
Like sure if we redistributed everyone to Siberia that could be nice. But remember that most of the world population are in China, India, and to a lesser extent Africa. All have very hot climates already. In Hong Kong they have these cool air conditioned bridges between buildings to escape the heat but still get around but that's not a viable solution for most of humanity.
Also I'll add, I live in a hot country and don't mind super hot days either because I work in air-conditioning. But I enjoy camping and it's super important how many days of the year I'm able to go enjoy the outdoors with my kids. Less outdoor time for most of the world seems super sad to me. Even if everyone watching tv indoors doesn't have any concrete gdp downsides
My in-laws moved to Georgia a few years ago, and they originally struggled with the vastly hotter summers. Then they realized that they could just do more outdoor activities in the spring and fall. That's a bit of an issue with kid's school schedules, but Georgia specifically has more time off during the school year (two weeks in October and March or April IIRC).
I hear some people saying the US "should have waited longer" to leave Afghanistan, and other people saying "they waited 20 years, what good would another few months have done"?
Is there some reason the US didn't spend the past year or so evacuating all civilians who they wanted out of Afghanistan? Right now it seems like it would be nice to have another year to evacuate people, but what have we learned that we didn't know a year ago?
Also, are people who aren't evacuated quickly enough really in danger from the Taliban? Why would the Taliban want to kill Westerners and anger Western powers? Isn't it in their best interest to let all the Westerners leave in an orderly fashion?
To your first question - was that not the intention? As late as 12th August leaked reports were suggesting Kabul could fall in 90 days (and that was seen as alarmist) e.g. https://www.bbc.co.uk/news/world-asia-58187410 giving a reasonable timeframe to evacuate. Unfortunately even the worst case scenarios planned for were optimistic.
There will be disputes over things like foreign currency reserves, debt, sanctions and aid over the next months, and the Taliban will need to decide if they want to co-operate with the West on evacuations to show good faith, or go the Iran/North Korea blackmail route and seek any opportunity for leverage (e.g. de facto hostages). Who knows their approach, but I wouldn't hang round Kabul to find out.
There are already a lot of people dead. I'm only seeing stuff about Afghan collaborators and nothing about US citizens, but I have little doubt that the Taliban would gladly kill US citizens if they weren't worried about getting caught. If it helps, think of them as the Afghanistan version of the KKK and Know-Nothings.
"She was whipped by Taliban fighters on one attempt to get through, she said. A man standing near her was shot in the head on another try, leaving his wife and baby in tears.
> Is there some reason the US didn't spend the past year or so evacuating all civilians who they wanted out of Afghanistan? Right now it seems like it would be nice to have another year to evacuate people, but what have we learned that we didn't know a year ago?
The military learned that a commander in chief was serious when he said he wanted to get out. They'd been able to persuade every prior President to stay and commit more money and troops.
I only spent limited time around senior leadership but here's my intuition:
>Is there some reason the US didn't spend the past year or so evacuating all civilians who they wanted out of Afghanistan?
Evacuation was never enumerated as an operational priority. Getting interpreters out has been a huge hassle for ages. For evacuation to be a priority, someone very senior in the administration has to make it a strategic priority. It's not the kind of thing that happens via inertia. Lastly, I can't overemphasize the level of denial in considering the possibility & timeline of Kabul falling. And staff officers don't make plans around scenarios they're collectively in denial about.
>Right now it seems like it would be nice to have another year to evacuate people, but what have we learned that we didn't know a year ago?
Learned about what? How to evacuate people? I would guess we're learning a lot about evacuating people and running a Berlin airlift style operation. I would also guess we're going to learn a lot about cohabitating in Kabul with the Taliban where you can't just airstrike them whenever they're visible because they're everywhere now.
>Also, are people who aren't evacuated quickly enough really in danger from the Taliban?
Depends who they are and what they did. Pashtunwali is a real thing, so I wouldn't expect Khmer Rouge style executions. But yeah, if you were an interpreter with a US partner force then you might get shot. Navigating Pashtunwali is complex, and there are a lot of relatively cosmopolitan Afghans in Kabul who would rather not live under a redneck regime.
>Why would the Taliban want to kill Westerners and anger Western powers? Isn't it in their best interest to let all the Westerners leave in an orderly fashion?
I'm mostly checked out of the news, so I'm very surprised if this is happening. A running question has always been how monolithic and organized the Taliban is. I have experience at a much more local level, and my strong prior is that the new regime will probably be fairly orderly, interspersed with some mob violence and maybe some symbolic shows of force - like killing some Jews or Christians or something. In the category of "a thing you're not allowed to say but is true" would be that the Taliban is often better at governance than the Afghan government. I'm not talking complex policy but just the everyday expectations of local government.
Ignorant opinion here: the moment the US started evacuating people, this would indicate a pull-out, and the collapse of everything (the army literally dropping weapons and scattering, the president filling a car full of cash and heading for the UAE) would have happened sooner. See this story from a pro-Ghani source: https://edition.cnn.com/2021/08/20/asia/afghanistan-ashraf-ghani-taliban-intl/index.html
""In the days leading up to the Taliban coming in Kabul, we had been working on a deal with the US to hand over peacefully to an inclusive government and for President Ghani to resign," he said.
"These talks were underway when the Taliban came into the city. The Taliban entering Kabul city from multiple points was interpreted by our intelligence as hostile advances," the senior official said.
"We had received intelligence for over a year that the President would be killed in the event of a takeover," the official added."
The pull-out or handover or whatever you want to call it was predicated on "there's a government in place, there are armed forces, they can step up and take over while we evacuate" but in reality that didn't happen. So even if the US forces had said "We are leaving over the course of an entire year", the only thing the native Afghan forces would listen to is "We are leaving" and they'd be out the door to save their own necks in seconds flat (and given the entire history of the country, who can blame them? It's a record of everything from "invaded by the Islamists, invaded by the Mongols, invaded by the Mughals, overthrew them and set up our own kingdom, then an empire, oops internal dissent knocked that on the head, to a monarchy, sort of invaded by the Brits (they were peddling influence during the Raj), to a monarchy that was sort of reformist and progressive, to a civil war over those reforms, to a monarchy again, to a democracy part one: I overthrew my cousin the king and declared myself president, to a Communist regime, to invaded by the Russians who set up a Soviet-backed regime, to more civil war, to the Taliban, to invaded by the Americans, to a democracy part two: this time for real - yeah right, the Americans are going home and that brings us up to date", with the only constant being that as soon as the new regime takes over, the followers of the old regime are for the chop).
I'm also seeing a lot of talk about this was deliberate on the part of some section or other of the American administration, that (take your pick of) the generals or the intelligence agencies or the hawks or the Deep State or the fairies at the bottom of the garden didn't want to leave and deliberately FUBARed the withdrawal in order to damage Biden and lead to calls for US forces to go back in. How true or not this is, I have no idea, there's a lot of gossip and rumour going around right now.
The obvious problem is that our evacuation would set off the collapse. The obvious solution is to have enough U.S. troops on site to hold the parts of Afghanistan that we need in order to evacuate our people for the month or so it takes.
A) Some people will not start evacuating until after the evacuation window has closed. August 16th was not the end of that window, but it does appear to have been a significant wake up call that there were only a couple of weeks left. Less, now.
B) Is that not what we're doing? Everything I've seen indicates that US citizens in Afghanistan are nominally free to make their way to the airport in Kabul (plus or minus the general dangers of ground travel in the area), where they will find 5k US troops running the evacuation effort. That is planned to end with the month, but if Biden extends it I doubt the Taliban will seriously contest control in the short term.
(SIVs are a complicated debacle that ought to be addressed separately.)
I normally disagree with you, but I wholeheartedly agree here. On top of troops, we're pretty good at airstrikes, as well, so I don't see why our only two options must be "stay another 20 years" or "pull out haphazardly, leaving our local allies to die".
In Germany, that is exactly what the current debate focuses on, since a few weeks before the collapse, the government was divided on the question of whether to pull out civilian helpers.
They decided against it because the Afghan government begged them not to pull out, backed by the US and other allies. So for Germany, very concretely, this was *the* reason why they didn't evacuate civilian helpers beforehand.
I mean this is just like timing the stock market. Yes of course the ideal time to cut and run is right before everyone else does. But cutting and running is causally influential to everybody else doing the same.
No it isn't, it's like insider trading, which can work. Biden controlled both the timing of troop withdrawals and, to a lesser but significant extent, the timing of evacuating US citizens, Afghan staff and current refugees. The second was allowed to happen after the first because they expected Kabul and the rest of Afghanistan to fall slowly. That was a poor expectation, and should've been either better understood or just, for security reasons, ignored. And then the second should've happened before the first. And US troops are a stronger incentive not to invade than US citizens and Kabulis running is a reason to invade.
I just don't agree. To speak just about the Americans for the moment, everyone was advised in June that they should leave. Many people did. Think about the mindset of somebody already ignoring their government's advice to leave a warzone. These people weren't going anywhere until they felt personally threatened. Should we have forced them out at gunpoint? I think that would have been bad both for reasons of personal liberty and optics. Could we have sent them another memo, where we somehow convinced them that "no really, you should really leave, but the Afghan government is really totally fine, we think, don't tell any of the locals that you're leaving." I just don't think that's realistic.
As for our local friends, it's really a similar story, but even more obvious to see how we couldn't get them out without precipitating the collapse. This is their home. These people are probably friends and relatives of people in the Afghan government. They have families. There really is no "get these people out but otherwise don't accelerate the collapse of the government." This is magical thinking.
Of course, we could have had the military do that. We obviously could have said "forcibly evacuate all of these people," and gotten everyone out while troops guarded the whole city of Kabul and then evacuated themselves. I think, though, that we'd just be blamed for more directly causing the fall of the Afghan government that way.
This just all seems like Monday morning quarterbacking to me. If there's anything I want to criticize, it's that a 20-year nation-building operation couldn't stand on its own feet for a few days. Proportionally, criticizing the exit seems like a really strange use of attention.
They probably would have left sooner if the US had made clear that Kabul could fall within a week. And many Afghans would've left sooner if the US government had offered them transportation, sped up the visa process, and made that clear.
The collapse was gonna happen anyway. If we started with the evacuation, rather than ending with it, that’d be better.
But yeah this is somewhat low significance, but it does demonstrate that even to the end they didn’t know at all what they were doing
1. Evacuating Civilians / People - The people we'd want to evacuate are the people who were responsible for maintaining the civil order we were trying to project in Afghanistan. I heard this from some buddies in country, so take it only as an anecdote, but as troop deployments decreased, compensation for private security over there guarding infrastructure and individuals got increasingly lucrative. As long as the nation wasn't actively enemy controlled, these people have no incentive to leave early and the US had no incentive to get them out early.
2. Are they in danger from the Taliban - probably not... The Taliban's incentive right now is to act as much like a normal government as possible for a while, which means doing their best to present a unified front that can maintain domestic control and not indiscriminately slaughter people. That being said - the Taliban has actively begun hunting down people they believe to be "collaborators," non-Western Afghans deemed to have helped the occupying forces (so, police investigators, military officials and the like), so those people have good reason to fear for their safety. Westerners are probably safe from mainline Taliban forces... but do you trust every component of the Taliban to have the discipline to hold to a party line of "don't harm Westerners"? What about non-Taliban forces inside Afghanistan... how much would you trust the Taliban to protect you against random looters?
My take is that Afghanistan just got drastically less safe overall, and this is what's prompting the flee not some kind of specific threat the individuals are running from.
One consideration is that the bulk of the State Department people are doing a one-year tour, and they are often pretty isolated from the general population. Afghanistan is a pretty undesirable post, generally avoided by State Department employees with clout. A bunch of short-term, mildly incompetent bureaucrats, and a mixed bag of incentives. It is not surprising that this ended poorly for so many Afghan civilians. The Americans don't seem to be dying in droves during this evacuation, and this is the main incentive.
There was a zoo in Massachusetts that tried to stop a government budget cut by announcing that such a cut would cause them to kill animals. It didn't work, but the general principle is that to stop someone else from doing X you make the consequences of X as painful as possible for the decision maker. Intelligently preparing to leave would have made it politically easier to leave. The "deep state" didn't want the US to leave Afghanistan and so didn't make the proper preparations. Evidence of the deep state's desires is the surprising amount of media criticism Biden has gotten for his Afghan position, despite the fact that it could cost the Dems the California governorship and thus potentially control of the Senate, given the advanced age of one of California's Senators.
The Taliban compete with other Islamic groups for global influence and funding. Killing fleeing Americans might be a "reasonable" PR strategy to use against their Islamic competitors.
It can't. Newsom is only in trouble because of COVID. My personal guess is that a lot of his Hispanic LA base is pissed off at the restrictions that caused their small businesses to implode last year. They might have been OK had it stopped COVID in its tracks, but it didn't seem to this winter, when there was a huge surge in LA.
That's really what it comes down to. He'll keep his white Bay Area FB engineer vote, but the question is whether he keeps the brown LA house painter/construction worker vote. If not...he's toast. I think he'll squeak by, but it will be close.
Right now the recall election is polling within the margin of error as to whether Newsom stays or goes. If he goes, it's not totally clear who will win, but for many weeks a right-wing talk radio host was leading the polls (because Democrats were mostly planning to leave that part of the ballot blank) though now there's a left-wing YouTuber who has suddenly jumped in the polls because no one else in the race has a D by their name.
The recall election is on Sept 14, 2021. My guess is that a lot of voters lump together all the politicians of a party, so hearing negative things about Biden lowers many people's opinion of Newsom.
Definitely not an expert, but my sense is that it isn't US (or UK etc) citizens that are in danger, or who might not get evacuated. It's the Afghans who worked for America who are going to have problems.
It's probably a very unfair comparison, but the French who collaborated with the Nazis were treated rather badly when the Nazis 'left'.
Because you might think there's a moral difference between collaborating with Nazis and collaborating with the US-backed government of Afghanistan, and it's quite possible that this moral difference will be reflected in the treatment of these people by the new regime (though it's also possible it won't be).
My completely non-expert opinion is that it was an attractor effect. Once enough Afghani powers decided that the Taliban were likely to win, they switched to supporting the Taliban.
So instead of having a protracted attrition war, we ended up with a sudden switch that didn't even wait for the troops to leave. So, ironically enough, you can't even put a number on "how many days the Afghan government lasted by itself" - it's negative.
So there was no evacuation because the official plan was for a long term war instead of an overnight change of power.
But how could the US be so surprised by this? I don't think they were. I just think that the decision makers work at simulacra levels that are decoupled from reality. Once the official plan was to leave, you can't move information upstream that would sensibly change the plan. I just can't imagine how this would work, institutionally. From everything I've seen in the past years, including the pandemic, I just don't think US has the capability of changing committed plans based on object level, competent analysis. When did it last happen?
I recently read an article by the guy who planned the withdrawal for the Trump administration. He blamed the fixed September 11 withdrawal date that Biden's team selected. The plan under Trump was to leave sooner, but with a criteria-based transition system: if the Taliban didn't meet their end of the agreement, the US was not going to leave. According to the article, that means the US could have stayed indefinitely under the old plan, but that the Taliban was cooperating in order to get the US out. When September 11 was set as the "have to be out" date, the Taliban lost any incentive to hold back, knowing that the chain of command would struggle to reverse course, and might be incapable of changing the plan.
I've read other discussions that basically say that the Taliban has been quietly massing outside all the regional capitals in farms and surrounding villages for the better part of a year due to the Trump plan, ready to take everything the minute we had drawn down. Other articles point to the May~October period as the heavy fighting time due to weather constraints, so figuring that regional capitals have been falling for about a month, that build up could have been as recent as March or so of this year. Either way, it looks more to me like the Taliban were playing nice up through about the official Trump withdrawal date and the general govt collapse was predictable after that short of a 40000 troop deployment and about 700 billion dollars worth of new bombing campaign.
That's still in line with the article I was referring to, as everyone knew the Taliban would take over once the US left. What is in question is whether civilians, equipment, etc., would be evacuated prior to that takeover. We'll never know for sure about the alternative world where the Trump plan continued, but there's a plausible scenario where we held to a goal-based timeline instead of an end-date timeline.
I just read a headline that says the Taliban will not announce a new government as long as there is still one US soldier present. It gives credence to Mr. Doolittle's suggestion, because even now the Taliban is holding back to some extent while there is still a chance, however minor, that the US could reverse course and attack them. I don't know what kind of deal there was with Trump. It depends on the details, but I can imagine that if they promised something like only partial participation in the government, then that would be the deal the Taliban would have to accept eventually.
If that's true, then Biden's plan was very stupid. I also think that the US should have stayed in Afghanistan until they were sure that some of the things they introduced would stay. Throwing away all that was achieved is always easy; it doesn't require smart thinking or anything. But there are real people in Afghanistan and they deserved better.
Oh come on. Where's the National Socialist German Workers' Party these days? What happened to Tojo? The Brits certainly fixed the Boers' wagon, home guerilla advantage notwithstanding. There are no Nationalists left in mainland China, and no followers of the late President Thieu bombing the occasional railroad bridge in Vietnam.
It is certainly possible to win against a weaker opponent even if he's got the home-ground advantage, can melt into the countryside, is willing to live in caves and eat rats and don suicide bomb vests. It's entirely possible to wipe those people out, root and branch.
But it requires focus and commitment, and quite often some pretty ugly decisions. You have to be damn sure that's what you want to do. Being half-assed about it, and not entirely certain what you're trying to achieve definitely doesn't work, and never has.
Nobody who's paid attention to American politics for more than 25 years is surprised that a plan of Biden's was stupid. The guy has been somewhat of a joke across the umpty-five times he's run for President -- and that was when he was younger and faster on his feet. He's always been just a good ol' boy Irish bullshitter, with very little of a clue. It's kind of stunning that he eventually made it over the top, and a real testament to how many people ended up disliking Trump, and how chaotic the Democratic primaries were in the midst of COVID. Even the guy who picked him for Veep (Obama) thinks Biden's a lightweight and pointedly declined to endorse him as a successor in 2016.
> Is there some reason the US didn't spend the past year or so evacuating all civilians who they wanted out of Afghanistan
I assume that the people who made that decision intended to abandon most of the local Afghans who helped US forces (interpreters, cooks, translators, etc.).
People on twitter have been saying that a) the State Department issued advisories months ago telling Americans to leave, but can't compel them to and b) repeating Biden's answer that starting mass evacuations of Afghans would lead to a "crisis of confidence" in the new government - essentially admitting defeat without a fight.
>a) the State Department issued advisories months ago telling Americans to leave, but can't compel them to
This is the part I keep coming back to - nobody in the US went to sleep and was shocked to find out that they had somehow woken up in Afghanistan. The official State Department Level 4 "leave on a commercial flight ASAP" advisory was April 27th. By June 15th, the boilerplate language had been updated to:
>If you have concerns about your health or safety in Afghanistan, now is the time to leave. Commercial transportation and infrastructure are intact and operating normally. Strongly consider this option. If you decide to remain in Afghanistan, carefully consider all travel and limit trips only to those that are absolutely necessary. Given security conditions and reduced staffing, the Embassy’s ability to assist U.S. citizens in Afghanistan is extremely limited.
If someone chose to stay after that, they were signing up for the uncertainty of whatever came next. To the extent that those people now regret their actions, that is not a failure of US planning.
The problem with the "crisis of confidence" framing is that the US intelligence community knew the Taliban would retake the country. If we knew it, surely the locals did as well. You can't count on Afghan soldiers holding out for a few months and then giving up, when they know that's the endgame plan. They have every incentive to switch sides and avoid fighting at all, maybe getting a place in the new government. It's one thing to think the new government would hold, and then ask the Afghan military to do its part. It's another thing to know that it wouldn't hold, and still expect the military to fight under those circumstances.
People do fight for lost causes, when they believe in them. The US establishment bristles at comparisons to Vietnam, but in this respect they are very apt. The Afghan government was an obvious sham, thoroughly corrupt, incompetent and alien to the native culture, propped up only by US dollars and military presence.
However, publicly admitting this would be tantamount to declaring that decades-long American policy was a colossal mistake, so the kayfabe had to be maintained in defiance of common sense and humanitarian concerns.
It was a colossal mistake, but theoretically they could have supported a more grassroots government, even if that meant including the Taliban. It seems the US pushed its own values too hard. The US embassy in Afghanistan put out an LGBT support announcement very recently; think about how controversial that was just a few years ago even in the US. I am sure even the most progressive parts of Afghan society would still be very biased against those things, and we pushed too much of that and too little of the basics: that a better court system, a better economy, better healthcare and even education benefit everyone, including the Taliban itself. There are many Islamic countries that enjoy all those benefits while being very conservative, like Saudi Arabia. It is not what we think is optimal, but it is much better than Afghanistan. We had no such perspective; we didn't value what they could achieve in those respects, and therefore we threw everything away.
I'm not confident I know how the US intelligence community works well enough to say who knew what, and when. But by analogy from covid-19 response, I'd say that there may be a wide, perhaps insurmountable gulf between "someone in some government agency predicted X would happen" and "the government knew X would happen well enough to operationalize that knowledge". I think it's too easy for politically inconvenient knowledge to get buried especially when leaders have incentive to ignore it and listen to the other advisors who claim to know differently.
Can you find anyone in a serious position within the US government who thought the Ghani-led government would remain in power after the US left? I've seen lots of predictions about how long it would hold out, but nobody seriously thought they would be around a few years from now. Most that I've seen were hoping to make it to the end of the year. Even a year ago the Taliban had taken control of so much of the country that the Ghani government couldn't be said to actually be in control of the country anymore. The steady Taliban gains throughout 2021 should have made that obvious to people months ago, when there was still time to make changes to the plan.
I used a program where you type in something and an artificial intelligence makes art based on what you typed. I typed in a bunch of movie titles and made a quiz. Can you guess the movie titles based on the art? This will test how well you understand how an AI thinks. Or how many movies you watch. There are no sequels or prequels.
Did you just type the movie titles? It looks like the AI has seen lots of the publicity art for these movies online, and has only been diverted from that in cases like Gladiator, Cleopatra, and Elephant Man, where the words in the movie title are common enough outside the movie that generic images took over. (I'm a bit surprised that Labyrinth is so associated with David Bowie's hairstyle rather than a Minotaur though!)
Just the titles, though it produces a series of images, and I tended to choose the easiest image. Except sometimes most of the title would appear as text, too easy.
In general I'm very sympathetic to Scott's views on education. I don't believe we've got any system that teaches kids more than basic numeracy and literacy. But I'm curious which of the following people think is more plausible:
1. People are plastic and develop intellectually in any stimulating environment. If they focus their attention on one area they'll develop pretty much to their potential in a few years.
2. We're just awful at teaching anything and getting people anywhere near their potential.
With that in mind, what level would Magnus Carlsen reach if he picked up chess at 15? Johnny von Neumann math at the same age? Simone Biles gymnastics at a decrepit 10?
You can begin to sketch an answer for (2) with birth month data among Canadian players in the NHL. Since youth hockey might start at age 6, the arbitrary calendar cutoff pits 6.99 year olds against 6.00 year olds. Consider the following toy model:
OT = -c1 * (23 - a) + c2 * t + c3 * r
where OT is observed talent, a is age, t is inborn talent, and r is both the quality and quantity of training received. This assumes that you come into your full hockey powers at age 23. If c1 is big enough compared to c2, and r is zero, the difference in observed talent between a 6.99-year-old and a 6.00-year-old is huge even if they have the same inborn talent.
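To make that gap concrete, here's a minimal sketch of the toy model in Python. The coefficient values are invented purely for illustration and aren't calibrated to any real hockey data:

```python
# Minimal sketch of the toy model above; all coefficients are made up,
# not calibrated to anything real.
C1, C2, C3 = 2.0, 1.0, 1.0      # hypothetical weights for age gap, talent, training
FULL_POWER_AGE = 23

def observed_talent(age, inborn_talent, training):
    """OT = -c1 * (23 - a) + c2 * t + c3 * r"""
    return -C1 * (FULL_POWER_AGE - age) + C2 * inborn_talent + C3 * training

# Two kids with identical inborn talent and no training yet:
print(observed_talent(age=6.99, inborn_talent=10, training=0))  # -22.02
print(observed_talent(age=6.00, inborn_talent=10, training=0))  # -24.0
# The older kid looks about two "talent points" better despite identical inborn talent.
```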
The fly in the ointment here is that training is a scarce resource. There are only so many good coaches, so many good opponents, and so much ice time. As a consequence, we give out the best training resources at age n to those who were better at age n-1. And this tracks all the way to the NHL level, where Canadians born from July to December make up just 42% of players. So even if Carlsen had it in him to pick up chess at 15, if he needed any help at all to do so, it would have been very tough to get. But that would have come from "The System", not Carlsen himself.
One could make the argument that we should not structure society in such a way that if a talented person fails to board their train by age 6 it's impossible to catch up. But that's not the one we've built.
I see your argument, but it also seems completely consistent with (1). Like, there are plenty of people out there with the talent, and they could be developed even at advanced ages, but our sieve system doesn't admit that.
It seems like more of a criticism of the system than a means to choose between 1 and 2.
Largely (1), in my opinion with some nod to the general interest level of the student in question. Honestly, this is sorta what Mike H and Marvin and the rest of the responses seem to be saying -- (2) is true if you're asking can we force knowledge into someone and for everyone generally interested in learning new things, (1) is a reasonable description. Speaking from personal experience, I rose to about 80% level of whatever group I was in, which is really really really good when you're around some incredibly brilliant people, but it took time in that environment and once away from it, it hasn't terribly improved. That said, looking back on how easy it is to comprehend new math after a master's in engineering, I think there's a certain amount of repetition and reinforcement that good pedagogy can really help with in people who aren't actively resistant.
I make this argument all the time, occasionally with people in the public education system. Weirdly enough, they admit most of this stuff, but their primary defense of education as a system is that for apparently a large percentage of these kids the 8-10 hours of stability they get at school is literally the least dangerous and potentially healthiest time in their day. That all the education is just a smokescreen to get kids out of largely bad situations and, hopefully, allow for intervention in the case of the absolute worst examples.
My guess is that the reason our system exists is partly this, and partly the benefit of free childcare for workers.
There was a comment in a recent thread along the lines of "Modern public schools teach a wide variety of topics so that kids can figure out what they want to do later in life. The things they want to learn about they practice and get better at, everything else they forget." That rings pretty true for me, where I can remember some math, some grammar, etc., but mostly I remember the things that I took further in college or had a natural interest in.
It's an expensive sorting system, where we value the ability of every (or at least a strong majority of) students to get a feel for the options available and select between them. Not everyone is very good at those things, but they have the opportunity to look at them for both occupational and hobby interests. By the time they're 18 or so, they have been mostly sorted, and the sorting seems to be pretty good.
If you look at K-12 education through that lens, it seems to do a pretty good job. If this hypothesis is true, then both #1 and #2 from your list can be simultaneously true, but maybe irrelevant to the goals of our education system.
I think the sorting is actually quite bad. You still have lots of people who go to law school, for instance, because they're not sure what else to do. They then find they hate being a lawyer. Most of the things you do in school have little to do with real jobs. There are also tons of kids who do know what they're interested in but have to spend time taking lots of other subjects they don't care for. While I think math is cool, I think 3-4 years of it in high school for people who don't care for it is ridiculous as a sorting mechanism. The whole system rejects the idea that kids should be able to do what they're interested in.
I don't really disagree. The primary issue is that schools rarely coordinate with employers, so they don't align expectations. They prepare EVERY student to be able to do just about any introductory field, whether that makes sense for the person or not. I'm seeing a shift in emphasis towards technical fields, which I think is positive, but it's not really enough.
Current public schools tend to do a good job of helping students realize what they enjoy doing, but a poor job of aligning that desire with actual jobs and in-demand fields.
From what I know of current research, there is very little parents can do _on purpose_ to significantly influence their kids. The most influenceable stuff is probably things like "basic interpersonal skills", which are non-inherited and teachable. Otherwise it's genetics, home environment and peers, which yes, all depend on parents, but aren't really easy to change at will. You are who you are, and your kids will learn from you.
So what about László Polgár? I think these are extreme cases where you have much higher than normal competence and expended effort, and which probably can deliver results. But the fact that they exist doesn't mean there aren't sharp decreasing marginal benefits on effort.
So I don't think it's OK to generalize from such cases (and I think there's an inherent bias that tends to make parents treat their kids as special anyway). The most likely correct answer is 1, given the current state of educational science. Might that change in 50 years? Hopefully, but I doubt it's around the corner.
I know there's good evidence for the limited influence of parents on IQ and personality. But is there similar evidence on, say, the ability to play an instrument, ride a bike or speak a language? These may sound trivial, but they are much more relevant to the discussion.
I chose extreme cases because if they didn't exist then it's almost impossible to reason about 2. If everyone got average results then it's impossible to draw any conclusions about the state of our teaching. If some get spectacular results it's worth asking why.
I'd go for 2. Also note that most of the people being taught are heavily resisting it. Simply selecting for lack of resistance should already have a major effect (cf. Mike H's example of Khan Academy). As a university-level teaching assistant, my teaching approach always emphasizes that learning is an activity on the part of the student. I believe the same would hold at other stages of education, except that the students are often not ready to actually take responsibility for their own learning. Therefore, effective learning requires a certain amount of intrinsic motivation on the part of the student. Schools offer plenty of extrinsic motivation to compensate, but that risks getting Goodharted (in fact, it plainly does; see "teaching to the test"). So I think it is questionable how much improvement would be gained by starting the learning path at an earlier age if this motivation has not yet developed. In general, it follows that it is more effective to let young children "play around" with advanced toys ("hey kid, check out this Commodore 64"), rather than putting them on a lesson plan ("OK, let's start with lesson one of 'Learning to Program with the Commodore 64'").
My own example would be how I learned to read English (I'm not a native speaker). Basically, I learned it by playing Pokemon Gold. I sometimes daydream about "Pokemon Ruby: educational edition". This hypothetical game would not force learning, but simply increase learning opportunity by adding an inbuilt dictionary, gradually increasing the difficulty of the language as the game progresses, and perhaps add some direct rewards for mini-games based on language (something in the flavor of the trick house from Ruby would probably work). Another example would be the "Donald Duck" comics (or at least, the Dutch version). Since the comics are pretty easy to follow without reading the text balloons at all, the text inside does not shy away from more advanced language, and uses a large amount of figures of speech. (the frequent reader of the Open Threads may recall that the Dutch language is enriched with many beautiful figures of speech, such as "Zoals de waard is, vertrouwd hij zijn gasten" ("The host will trust his guests as much as (he is trustworthy) himself"))
In general, it seems there is a lot of untapped potential in this space of "non-forcing" educational games. Young children have a very good learning ability, but lack the discipline to focus it towards ends that are known to be productive. Let their learning river flow downhill, I say, rather than trying to direct it with channels.
I wouldn't go that far. Schooling as currently practiced often does, but I don't think any far-reaching pedagogical reforms would be necessary to prevent destroying motivation. Rather, I believe the problem is that obtaining motivation for useful topics (in the sense that they will be useful after the schooling has finished) in the first place is a difficult problem to solve, and even more so en masse in the school setting. Solving this problem does require more radical action, I believe. I don't think letting the kids have some choice over a more traditional curriculum is enough, as both freedom of exploration and useful guidance are necessary. This demands an amount of attention that I think is not scalable, unless we can digitize and "gamify" it effectively.
There's some evidence for (2), namely the results seen by home schoolers and the Khan Academy. But I talked to a working teacher once, a friend I grew up with. He knew all about Khan but wasn't particularly impressed. The problem with a lot of cool ideas to improve education is they end up being tried on people who were already motivated and 'special' in some way. Once you truly accept that 'teaching' is 25% imparting knowledge and 75% daycare/pacification of the child, then questions about how good we collectively are at teaching look quite different.
Have any non-Europeans had experience traveling to the EU during COVID?
I know you can get into your first European country by showing proof of vaccination and a negative test at the airport, but what happens if you want to go to another country (eg from Portugal to Spain, or Spain to France)? Do you have to get another test? Or do they assume anyone coming in from friendly European countries is okay?
I'm a US citizen. I flew from New York to Amsterdam in July. I had to have a negative PCR test to board the plane. No one cared about my vaccination status (I am of course fully vaxxed). In Amsterdam I changed planes to go to Tanzania. There I had to take another covid test to exit Kilimanjaro Airport. It was pretty much just a rip-off. I certainly didn't get covid between New York and Tanzania on planes that had a negative covid test as an entry requirement. Anyway, it doesn't directly answer your question about one EU country to another EU country, but it is a case of an EU country to another foreign country.
I traveled to Austria a few weeks ago, and then to Spain a few days ago. I managed to get into Austria (connecting in Germany, if that matters) just with my vaccine card, but in order to get into Spain I had to get tested (which is really easy in Austria).
Remember that you also need to get tested to go back to the US, regardless of your vaccination status, and that Europeans in general are still barred from the US.
Each country has its own requirements, which you can usually find very easily by searching for "{country} covid entry requirements"; you will find some official government website. You need to present vaccine or other proof whenever crossing a border, and some countries require you to fill in an online declaration before arriving.
NB: I went to Greece a few months ago and nobody was really checking the PLFs. You could wave anything that looked like a QR code at the border guards and that was sufficient. I don't know if that was a fluke or normal though.
There is no "Europe" w.r.t. COVID. Every country has their own constantly shifting policies that change on a day to day basis. EU doesn't control health policy, despite some attempts to do so like the vaccine purchasing programme. Given how badly they flubbed that it's unclear whether more health powers will get transferred any time soon.
As a European: this depends on the country you are travelling to. I had to do a test for some countries, while for others showing proof of vaccination was sufficient. I don't know how this is for non-Europeans, but I imagine it's the same for you.
For people around 30 years old, what is the better choice of Covid vaccine (we have J&J and Pfizer available to us)?
The primary consideration would be protection against possible long Covid, but other considerations (including potential side effects, infecting older, vaccinated, family members, etc.) are also factors.
Personally, I favor the mRNA vaccines (Moderna and Pfizer), but only because I'm older and it's possible I've already been exposed to the Ad26 vector used in the J&J vaccine -- which could make it much less effective. I think if you're younger that would be less of a concern, because you've met fewer adenoviruses.
I think you've mistaken what that paper is about. It's a discussion of the possible factors that underlie a stronger antibody response in people who have been *infected* and then given a vaccine. It's true that at the end they surmise in one sentence that a mixture of vaccines "may" have similar effects, but they offer no detailed argument for that, nor point to any actual real-world evidence of it.
I would be cautious about reading too much into an Oxford study saying that the Oxford vaccine is fading less quickly, especially given that for all the charts, the highest confidence line of each vaccine is still outside the error bars of the other, and there are lines within the error bars of each one that have higher or lower slope than the highest confidence line for the other. It does seem suggestive, but I would want to see actual studies showing an actual period where mRNA protection is lower than adenovirus vector protection before being confident that this difference in fading is real.
Andrew noted that it was expected beforehand that this would be the case, and it wasn't clear to me why that would be expected a priori (admittedly, I hadn't heard of either mechanism prior to COVID). It might be related to the fact that some vaccines are designed to optimize just antibodies and not T-cells:
Please stop spamming it in every thread. Also, "content across tech, finance, art, media" is hopelessly vague - no surprise that you need to resort to spamming.
I know nothing about AI research, technically speaking. I have no training in computer programming or mathematics, so I will admit my ignorance. But I have a question. Is there an algorithm for what I will term
the will to live??
Bear with me.
Imagine your life under the most intolerable circumstances. You are in a concentration camp, you are in solitary confinement and subject to random torture and other abuse. The rational conclusion seems to me that you should just get out of it anyway you can. But human beings don’t seem to behave that way.
You don't need a fancy AI to implement "the will to live". You could make a really simple robot car that would roll around on a tabletop, and avoid falling off the edge -- and you can build it out of purely analog components, no computers needed. This car arguably has as much "will to live" as a mouse (I mean, the furry kind, not the computer peripheral kind).
"Algorithm," isn't really the term you want here. The algorithm isn't the thing which is alive or dead, so an algorithm alone cannot compute, "the will to live."
As we move from narrow intelligences to more general intelligences, we tend to turn our systems into "agents." An agent has some physical substrate. In order to effectuate self-preservation, the agent needs to have a model of what physical form it exists as, and then it can make decisions which avoid its destruction. Many narrow intelligences already have agent-hood and self-preservation. Take a self-driving car: it knows its physical dimensions and avoids crashing into things. You might say a self-driving car has a will to live, as we keep seeing them make decisions which avoid their destruction.
Even in purely digital settings, many of OpenAI's video-game playing AIs exhibit this. They move away from hazards and towards the end of the game. The engineers create a scoring function to evaluate the AI's performance. In order to get the AIs to do something rather than nothing (because in some games, not moving will never win but it will avoid death), time was made to negatively impact the score: not only should the AIs move towards the end of the game, but they should do so quickly. From this time pressure, suicide immediately emerged: if the AI was unsure it could make a certain amount of progress without dying, it might immediately kill itself to avoid the time penalty of trying. Obviously they had to come up with more sophisticated ways to score progress from then on.
So in general, is some analog of "the will to live" present in our systems? I think we generally just call it "self-preservation" but yes, if we want our agents to survive for very long, we generally need to design in some self-preservation. It's not really an algorithm, more a quality of the agent.
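To make the time-penalty incentive described two paragraphs up concrete, here's a toy expected-score comparison. All the numbers are invented; this is not any real OpenAI environment or scoring function, just a sketch of the trade-off:

```python
# Toy expected-score calculation: with a per-step time penalty, a low chance of
# finishing the level can make immediate "suicide" the score-maximizing move.
def expected_scores(p_success, reward=100, steps_to_finish=50,
                    time_penalty=1.0, steps_to_die_now=1):
    try_it = (p_success * (reward - time_penalty * steps_to_finish)
              + (1 - p_success) * (-time_penalty * steps_to_finish))
    die_now = -time_penalty * steps_to_die_now
    return try_it, die_now

try_it, die_now = expected_scores(p_success=0.2)
print(try_it, die_now)  # -30.0 vs -1.0: with a low success chance, dying immediately scores better
```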
There is a well-understood scheme explaining how properties similar to a "will to live" emerge in a sufficiently smart goal-directed agent.
Agents will take actions that increase the chances of fulfilling their goals and scoring more points on their utility metrics, and will try to prevent actions that decrease them. In most cases being active increases the chances of success, therefore "will to live" is an instrumental goal for most agents. Obviously there are exceptions. If the whole goal of an agent is to have a cup of coffee delivered to you and someone is already bringing you some coffee, such an agent wouldn't mind being shut down. Special environments can even motivate agents to commit suicide if it is somehow the best strategy to accomplish the goal.
Human goals are complicated and multidimensional, so it's not that easy to say what the optimal behaviour would be in your example. Some people may value the tiny chance to escape and accomplish their goals in the future more than they dislike all the torture and abuse now. Some may not. Some will commit suicide, some will not.
It's also easy to understand how humans can sacrifice their lives for some greater cause, or knowing that this sacrifice will help accomplish their goals, or how people die in peace, knowing that their goals are accomplished. It seems to me that the difference between AI preservation as an instrumental goal (will to score) and human will to live is only the difference in the utility functions.
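A bare-bones way to see the "instrumental" part of the coffee example above is a toy expected-utility comparison. The probabilities and utilities are purely illustrative, not anyone's actual utility function:

```python
# Purely illustrative: compare expected utility of staying active vs. allowing
# shutdown for the "coffee delivery" agent described above.
def expected_utility(p_goal_if_running, p_goal_if_shut_down, goal_value=1.0):
    return {"keep running": p_goal_if_running * goal_value,
            "allow shutdown": p_goal_if_shut_down * goal_value}

# Someone else is already bringing the coffee: shutdown costs nothing, so no resistance.
print(expected_utility(p_goal_if_running=1.0, p_goal_if_shut_down=1.0))
# The agent is the only route to coffee: staying "alive" falls out as an instrumental goal.
print(expected_utility(p_goal_if_running=0.9, p_goal_if_shut_down=0.0))
```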
For the purposes of your question there are <furiously handwaving> two kinds of "AI" using the term in its modern context.
1. Pattern matching/generating AI. This is the type we actually use in the real world for useful tasks.
2. Goal-oriented agents. For example, non-player characters in a video game.
The line between them can get a bit fuzzy because often the pattern matching AI is wired up to some IT system that's making decisions (semi-)automatically. But from an AI perspective they're different.
Pattern based AI is just a pure algorithm. It has nothing that could be described as will. It outputs a guess about something, usually with a score or probability attached. No decision is made at any point and the AI cannot learn from experience whilst working, instead, all learning is done via a separate training process that outputs a new version of the AI. If you don't run the training process and replace the AI, it will keep making the same mistakes forever.
Goal-oriented agents are the type of AI that DeepMind focuses on. It's the type that Hollywood means when it makes movies about robots that kill their masters and try to take over the world. Agents are closely related to the video game world. Given an objective expressed as a 'score function' or 'specification function', the agent will learn to maximize its score using whatever strategies are available to it. Building agents turns out to be very hard because it's easy to create a scoring function that doesn't quite express what you really meant, hence this hilarious spreadsheet of AI fails where the AI learns to achieve its task in ways that the designer didn't anticipate:
This type of AI has no specific "will to live" but it does have a "will to score" and usually to get a higher score, it must be alive. In other words such an AI will resist attempts to shut it down. This leads to the field of AI safety and all the subsequent discussions you'll see on Scott's blog about whether super smart AI will take over the world in future.
Note that the differences between "will to live" and "will to score" can be very subtle. In one famous example of specification gaming an agent being trained to play Tetris learned to pause the game indefinitely a split second before it was due to lose. As its only purpose was to play Tetris, this can be seen as a strange form of suicide.
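For concreteness, here is a minimal sketch of how a naively written scorer gets gamed that way. It's a toy stand-in, not the actual Tetris agent or its reward function:

```python
# Toy stand-in for the Tetris example above: a scorer that only punishes the
# moment of losing, and attaches no cost to pausing, is maximized by pausing forever.
def naive_score(lines_cleared, lost):
    return lines_cleared * 10 - (1000 if lost else 0)

honest_play = naive_score(lines_cleared=40, lost=True)    # plays on, eventually tops out
pause_hack  = naive_score(lines_cleared=5,  lost=False)   # pauses just before losing
print(honest_play, pause_hack)  # -600 vs 50: the specification prefers the pause hack
```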
Is this some kind of GPT-3 generated comment? If so, it's disappointing.
If a human wrote this, then I would say that your use of the word "algorithm" doesn't make sense in this context, and you should consider rephrasing without that word because with it, it's incoherent.
I started taking 400mg of SAM-e, based on the discussion in the depression post. I am seeing pretty dramatic results-- depression gone; much less anxiety; much less rigidity, ie more flexibility, more 'openness to trying new things'; also greater motivation, or, perhaps, less of a hump to get over to get myself to go do something; and, last but not least, greater happiness, and more enjoyment in life. A real life changer. Thank you.
I had the same experience about 10 years ago. The first 3 days frankly felt like I was on a low dose of MDMA. Then it leveled off, but still lifted me out of a pervasive depression. I stopped taking it after a few years. Occasionally I've tried it again, but it never seems to have anywhere near the effect of that first round.
I'm curious whether anyone caught Tesla's "AI Day". I did. They showed off their neural network, explained how they convert 8 camera video feeds into a 4D vector space, and then have a planning engine on top of that which decides what to do. They also showed off their new from-silicon-up "Dojo" neural network training computer that is optimized for the task.
They also showed a new Tesla humanoid robot, and said they plan to build a prototype next year. The idea is that it can do basic and boring human tasks. Elon casually mentioned that long term, he supports UBI.
I'm curious if this causes any updates in people's thinking about AI risk timelines, and also people's thinking about UBI in general.
Coincidentally, I read a very sad news story today about a fatal crash involving a Tesla on Autopilot mode (the self-driving but not really mode).
The fault clearly is down to human error (the guy was driving on a back road at night, he was talking on his phone, he dropped the phone and bent down to look for it, is it any wonder the car crashed into another?) but equally obviously the family of the dead person are suing Tesla because that is the entity here with deep pockets.
The story is surprisingly negative: it contrasts what other car manufacturers do in such situations (i.e. making drivers keep their damn eyes on the damn road) with how Tesla is set up, and has one quote from Musk a few years back that reads badly in this context.
So I think whatever about AI risk, this demonstrates yet again that the big risk is not from the AI, it's from the way humans use it.
"One of a growing number of fatal crashes involving Tesla cars operating on Autopilot, McGee’s case is unusual because he survived and told investigators what had happened: he got distracted and put his trust in a system that did not see and brake for a parked car in front of it. American Tesla drivers using Autopilot in other fatal crashes have often been killed, leaving investigators to piece together the details from data stored and videos recorded by the cars."
"It’s hard to miss the flashing lights of fire engines, ambulances and police cars ahead of you as you’re driving down the road. But in at least 11 cases in the past three and a half years, Tesla’s Autopilot advanced driver-assistance system did just that. This led to 11 accidents in which Teslas crashed into emergency vehicles or other vehicles at those scenes, resulting in 17 injuries and one death.
The National Highway Transportation Safety Administration has launched an investigation into Tesla’s Autopilot system in response to the crashes. The incidents took place between January 2018 and July 2021 in Arizona, California, Connecticut, Florida, Indiana, Massachusetts, Michigan, North Carolina and Texas. The probe covers 765,000 Tesla cars – that’s virtually every car the company has made in the last seven years. It’s also not the first time the federal government has investigated Tesla’s Autopilot."
Hmmm. Now, I'm not saying there isn't something wrong with Tesla's Autopilot, only that nothing in the above post should be thought of as equivalent to "Tesla's Autopilot is unsafe compared to humans". It is very unlikely that any self-driving system will be 100% safe. Thus deaths will occur. It is almost certain that the types of deaths will be different from those we are used to.
Personally, I'm interested in my absolute risk of dying in a car, not the particulars of how that occurs. So the real question is: on a per-100,000-km basis, which is safer? Me driving a Tesla manually, or letting the Autopilot drive?
By all means figure out why flashing lights freak it out, but that is kinda not the real point, is it?
The Tesla website has some pretty big numbers that sound pretty compelling. https://www.tesla.com/VehicleSafetyReport (I know autopilot engaged does not equal reading a book in the back seat)
So far using "self driving" in cars appears to be about the same risk level as driving slightly drunk: as long as everything is normal, you're OK. But if something weird and unpredictable happens, you've got a problem.
And of course, the usual argument for why we need self-driving cars is because humans occasionally drive drunk, sleepy, et cetera. If the self-driving cars can only handle themselves confidently under the conditions a drunk or sleepy human could...meh.
Boston Dynamics spent over 20 years figuring out how to make a bipedal robot move as gracefully as a human, and Tesla thinks they can do it in a year? Good luck with that.
And that's just for moving around, in a controlled industrial environment. Musk wants a robot you can tell "pick up that wrench over there and tighten that bolt," which is several steps beyond that.
I've long stopped believing (if I ever did) any of Musk's announcements. He's a modern-day P.T. Barnum who is a master of hype and what will get the public excited, delivery of same is a completely different matter.
I have to admit, SpaceX seems to be working, so that is in the plus column for him. But autonomous robots that will do the grocery shopping for you? Starting from this time next year? I'm more inclined to put my faith in "I need a new pair of shoes, I think I'll see if there are any leprechauns around to make a pair for me".
My partner recently got a new pair of shoes - we've been into minimalist shoes for years, but he tried this new company, and their shoemaking elves had great service: https://www.softstarshoes.com/about-us#meetus
To be fair, he did not promise robots that do grocery shopping, or really anything more than _maybe_ a prototype built next year.
I think Musk's successes go beyond SpaceX. Tesla is overall looking quite positive and has forced the auto market to change towards electrification sooner than would otherwise have happened.
While I personally think Musk is a dumb idiot asshole who lucked into his fortune and mainly sponges off of actual talented people, Tesla took a bunch of technical risks that look like they paid off/will pay off big, and having him as a front man helped massively.
Yeah, after how many years of promising totally self-driving cars, now it's fully general AI-driven humanoid robots. This, like 90% of the announcements coming out of the company, is nonsense aimed at investors and branding.
Not for me. The more I look at UBI the more support for it looks like a kind of performative "ok socialist" peacock routine.
A UBI is basically welfare++ rebranded in a form that's sufficiently extreme, and thus distant in the future, that it appeals to utopians who don't think about it too deeply. There's nothing fancy about it. It's just what we have today, amped up to 11. It would pose the exact same problems the existing welfare system poses, but much worse. In particular, the level of government money printing in most countries, combined with unrealistically optimistic state pension projections, strongly implies they cannot actually afford today's welfare systems, let alone welfare++. Because UBI is just a new re-proposal of socialism, most of the discussions around it look old and tired despite the shiny new gloss afforded by a three-letter acronym. For example, UBI schemes are often posited to let people become better people by focusing on things that are hard to make profitable (which - left unstated - are implied to be better things). Real-world UBI trials don't bear this out, however, and when they fail, the explanation is always that it would have worked if only they'd UBI'd harder. That's the exact same explanation you hear when asking those on the hard left why communist countries failed: they just didn't do it right!
Musk likes talking about UBI because he likes robots, and mentioning UBI lets him avoid thinking about the economic dislocation caused by robots without needing to do anything near term that would cause smart people around him to rationally object, like actually campaigning for more welfare. It's hard to feel like it's more than that.
The main problem that welfare has that UBI doesn't is means-testing. Making people fill out lots of paperwork and wait a while for the system to decide if they're a Deserving Poor or not adds a lot of friction to the process and increases costs compared to simply mailing out checks.
(There's "means testing" of a sort at the other end when taxes are due and some people get taxed more than they get in UBI, but the IRS already does that for everyone so there's no increase in bureaucracy. It also prevents any sort of "welfare cliff" since a progressive tax system doesn't work that way.)
Like, I don't know what articles you've been reading, but nearly every one I've seen on UBI has included a few reasons why it's better than simply putting more money into the existing welfare system.
I've read the same articles but they don't make sense.
The idea that administration overheads are so large that they could pay for UBI crops up frequently, or occasionally in the weaker form you're proposing here, where maybe the overheads can't cover the additional costs but they're still very large. This claim is never backed up with evidence though, it's just asserted. Here's an extensive discussion that seriously undermines all such claims:
However you don't really need a paper to see this. Consider that UBI must still have very significant administration and means testing costs. If it didn't then nothing would stop fraudsters creating fictional people and registering for UBI. Also problematic, nothing would stop tourists, people resident elsewhere and so on from registering. Thus a lot of administration work must go into establishing that a claimant is (a) real, (b) not dead, (c) probably of an eligible age (d) not in any debt especially to the state (as debt holders should be serviced first) and so on and so forth.
The case for UBI always seems very weak, as if people backing it haven't really sat down and spent time thinking about the precise details of how it'd work and how much it'd cost. Even ignoring implementation costs, the benefits are often quite wishy-washy and naive/idealistic, of the form "lots of people would become poets and artists".
>The case for UBI always seems very weak, as if people backing it haven't really sat down and spent time thinking about the precise details of how it'd work and how much it'd cost.
Plot the net of taxes & transfers which would be replaced by implementation of a UBI (i.e., means-tested welfare programs which would be shuttered) against pretax income. Perform an OLS linear regression against the data. The y-intercept is the UBI amount with approximately no net budget impact.
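Something like the following sketch, with entirely made-up data points standing in for the real net-transfer figures (only the method, not the numbers, is the point here):

```python
# Sketch of the calculation described above, using invented data for
# net transfers (means-tested programs a UBI would replace) vs. pretax income.
import numpy as np

pretax_income = np.array([0, 10_000, 25_000, 50_000, 100_000, 250_000], dtype=float)
net_transfers = np.array([14_000, 11_000, 7_000, 1_000, -9_000, -40_000], dtype=float)

slope, intercept = np.polyfit(pretax_income, net_transfers, deg=1)  # OLS fit
print(f"approximately budget-neutral UBI: ${intercept:,.0f}/year")  # ~ $12.8k with these toy numbers
```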
> The idea that administration overheads are so large that they could pay for UBI crops up frequently
Nobody has ever claimed that overheads would pay for UBI completely, merely that eliminating the overheads of multiple programs and the political wrangling that comes with it frees up more funding to actually help people.
> However you don't really need a paper to see this. Consider that UBI must still have very significant administration and means testing costs.
It doesn't: that administration cost is *already being paid and will not go away*, because it happens with your yearly tax return. Pay everyone a UBI, at tax time they declare what they actually made, and you recoup the UBI above a certain threshold.
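A toy version of that arithmetic, where the $12k figure and the clawback rate are invented for illustration and don't correspond to any actual proposal:

```python
# Invented numbers: a flat UBI paid to everyone, clawed back through an extra
# marginal tax rate at filing time, with no separate means-testing step.
UBI = 12_000
CLAWBACK_RATE = 0.20  # hypothetical surtax used to recoup the UBI from higher earners

def net_ubi(income):
    return UBI - CLAWBACK_RATE * income

for income in (0, 20_000, 60_000, 150_000):
    print(income, round(net_ubi(income)))
# 0 -> 12000, 20000 -> 8000, 60000 -> 0 (break-even), 150000 -> -18000 (net payer)
```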
I feel like a lot of people really overcomplicate this topic.
You don't need heavy administration and means testing to ensure that the recipient exists and is a living citizen of your country.
The problem with administration and means testing is in ensuring that someone who makes ~$10,000 a year actually has the paperwork to prove that they're not secretly making ~$200,000 a year; filling out that paperwork is one of the reasons so many social benefit programs fail to reach their intended recipients.
I'm not sure how much easier it is for a person living under a bridge in Austin to prove that they exist and are a citizen than to prove that they make less than $200,000 a year, but the claim is that it's enough easier that they're more likely to actually get the benefit.
Another problem with means testing is that those on welfare have little incentive to take a job that doesn't pay substantially more than welfare. To me, that seems like the main benefit of a UBI.
I spent a couple years "Not Working" 'cause if I took a job, I would suddenly have to pay 15k+ for shitty health insurance with a bigass deductible, and that is with Obama's shitlib credit bullshit.
>Thus a lot of administration work must go into establishing that a claimant is (a) real, (b) not dead, (c) probably of an eligible age (d) not in any debt especially to the state (as debt holders should be serviced first) and so on and so forth.
In the UBI proposals I have seen, individual income taxes get raised to the extent that the effect would be net zero on the median* taxpayer. In most countries, that kind of verification is already done anyway by the tax authority, citizenship / electoral roll records, any existing welfare benefits, the school system, and the military draft system in the few countries that still have one. If you already have systems for that, adding a UBI instead of means-tested welfare isn't much of a change.
* Or some other percentile point, dependent on how socialist dreams one has.
The humanoid robot thing seems pretty dubious. But it is a good distraction to change the news away from Tesla getting slapped with some investigations from the Feds over the Full Self-Driving Program.
Musk likes to trot out Cool New Stuff whenever there's bad news afoot for Tesla or SpaceX. It gets reporters talking about how the Cool New Stuff is cool, or stupid - but either way they're talking about it.
This seems overly cynical. Musk likes to trot out Cool New Stuff because Musk is the kind of guy who really likes Cool New Stuff.
I don't know why people, even the sort of people who usually like this sort of thing, have to be so cynical about Musk. There's seven billion people on this stupid planet, and sometimes it seems like Elon is the only one who is actually doing anything worthwhile. In most other contexts, people seem to understand that big technological bets are hard, and that if even 10% of your big bets succeed then you're doing pretty well. But when conversation turns to Musk, who has a much better strike rate than just about anyone else while also pursuing more ambitious goals than anyone else, all you hear about is the fraction of his projects which failed or else weren't delivered on time.
> sometimes it seems like Elon is the only one who is actually doing anything worthwhile
This sounds like Musks' reality distortion field hiding everyone else who is working on these same problems in a less flashy, and often equally or more effective, way.
Well, his two main things, long-range electric vehicles and reusable rockets, do seem to be best in class, and he has the benefit of being able to say he was working on them before anybody else. It seems like whether you think people are drinking the kool-aid depends on how much you think people are basing their perceptions on his many side projects.
I understand why people, probably the very people who work at Waymo or some brain-interface company, get mad about being overshadowed by Musk's flashiness, but I don't think it's a legitimate gripe, and I observe many of them take their jealousy too far. Getting the public excited about science and technology is important, and if anything it isn't valued enough. Most companies don't appear to want Musk-level publicity, and that's fine.
I don't know much about the brain-interface stuff. My biggest interest that he talks about is transportation technology, and with both Hyperloop and Boring, his main goal seems to have been attracting attention away from established alternatives in order to preserve the market for automobiles rather than high-capacity transit. It's not like he's the first person to propose pneumatic or vacuum tube transport (these ideas go back at least to the 1950s), and his main idea for making tunnels cheaper is to make them too small to carry many people.
It's good to get people excited about new things, but not by telling them that actually workable solutions are boring so that you can hold out the promise of exciting transit that will actually just require continuing to buy cars from his car company.
Ok but do you actually think he's meaningfully detracting support from projects like high speed rail? Was Vegas ever really going to build an efficient transit system under their convention center? It seems to me that, being Vegas, the project was always intended to be an attraction. And even still, the tunnel was completed on time and for a very reasonable price. If anyone wants a tunnel that size, there is now a proven, inexpensive option. That seems like a minor win for American civil engineering to me.
It seems to me like you have an unnecessary zero-sum attitude about this. The idea that Musk is "preserving the market for automobiles," as if that's something that he even feels a need to do, seems a little excessively paranoid. He's making a company that digs tunnels. The Boring Company is an independent for-profit entity from Tesla, they will surely dig a tunnel for whomever is willing to pay them to dig a tunnel.
I don't trust Elon Musk as far as I can throw him, at least on non-rocket topics.
I doubt there's a functioning humanoid robot in Tesla's near-term plans. If there is, I am fairly sure it's a bad thing. In that sense, I can only assume that Mr. Musk is deliberately cultivating a minor risk to head off a major one.
Well, if he can develop the functional humanoid robots, then he can get them on the assembly lines making Teslas, and then maybe the delivery backlog can be cleared:
I have been looking into grad school for mathematics, in particular with the goal of eventually doing research. It looks like the job market for research positions is incredibly tight right now, but a large part of me wants to just study mathematics anyways. There are countless articles talking about how job prospects for pure math PhDs are actually okay, but most of the time they bring up examples of big data and marketing analytics, which I have something of an ethical opposition to. So I figured I would ask: What sorts of jobs can you get with pure math, other than data analytics?
FWIW, my team lead at Google had a PhD in pure math, and if I recall correctly he got hired right after completing it. I don't know if he learned to code before or after it though.
That was what I did (probably not on your team though). It was a bad choice - you should start out somewhere where the math gives you at least some advantage (not that you'll be doing abstract algebra anywhere outside encryption, but there are lots of places where good intuition is useful). Google is bad both because it has no room for that (99% of the work is writing protobuf pipelines), and because it's a bad learning environment (the typical response to "I don't understand X" is "read the code/git good").
I did a pure math PhD (mathematical physics) but ended up as a health policy analyst. Nothing compares with pure math for engaging the mind, but I do feel like I often work on hard problems across a wide range of areas, and I generally feel like I'm contributing to the greater good. I do work with big data, but I also design complex randomized trials and have opportunities to learn and apply economics, psychometrics and medicine. I work with doctors, scientists, economists, and psychologists, almost all of whom like to learn new things that will help people. I also earn multiples more as a consultant than as an academic, and have been paid to travel to Europe, Asia and Africa just to talk to people about what I do. So while I miss mathematics quite a bit, I feel pretty good about where I landed.
I second most of what previous commenters said, and would like to add/reframe just a few ideas:
1. When I started my math phd, I was quite sure that I wanted a career in pure math research, but I underestimated how difficult it would be to keep a long-term interest in questions that are so theoretical (I studied algebraic geometry/mathematical physics). The math community can be a bit inward-looking, and it turns out that I'm much more interested in playing around with up-and-coming technological tools; I'm more tinkerer than thinker. Still, I think pure math phds are worth it for those who are genuinely interested in the subject -- on the off chance that your projects work out and you maintain your interest, math research is one of the most intellectually rewarding careers.
2. Regarding job prospects after the phd, I used to believe in a dichotomy between pure research and selling your soul to wall street or facebook. Luckily, the spectrum is much more continuous. Like many others these days, I ended up in bioinformatics research, which I really enjoy. To me, one particularly appealing feature of this field is the existence of long-term academic positions other than such-and-such professor: bioinformaticians and research scientists who get to do the fun tinkering without worrying so much about people management or grant applications. For those who like the culture and intellectual freedoms of academia, but want less involvement in the associated management and politics, this is an option worth exploring.
3. Related to 2, there are ways to hedge during your math phd, by working on pure areas which are close enough to applications. Others already mentioned cryptography and formal verification. Mathematicians more interested in geometry/topology could check out topological data analysis, or the recent applications of gauge theory to obtain convolutional neural networks which are equivariant w.r.t rotations or other symmetries.
4. Anecdotally, I agree with Elena that a math phd has a lot more signalling value than a math master's. In practical terms, some research scientist jobs in both academia and industry are gated behind a phd requirement. In industry these kind of jobs are a minority, but they're likely the ones where you get to do the most interesting work.
I may have a somewhat unique perspective relative to the commentariat here, in that I enjoyed the process of getting my PhD (in pure math). I left to industry halfway through my second postdoc, partly because I didn't have great postdocs (and didn't have enough of my own research program to thrive despite that), and partly because I wanted to settle down in a particular location and would rather do industry things than teach. In pure math (possibly unlike any experimental science ever), I found the graduate school experience to be "not a lot of money in exchange for not a lot of work," and would have considered stretching it out longer if a scholarship were available. (I'm from the US, but for Reasons did my PhD in England, so the standard US option of "you will teach some calculus classes in exchange for tuition + stipend" was not available.)
Back In My Day (PhD 2012), the exit route was finance first, then data science. This may have flipped. Also, data science isn't quite equivalent to marketing, e.g. I worked for a logistics company, and some -- though certainly not all -- of what the group did actually involved doing the logistics better. I've also seen manufacturing companies where the data science group is involved in process control.
I currently work in an engineering outfit (my title says Something Something Engineer). I think this is not terribly uncommon, although many of the opportunities will be tied to the Department of Defense. Of course, there is also the NSA, and people I know who worked there have said good things about the work environment.
A couple people recently mentioned quantum computing to me as their (actual or potential) exit strategy.
Google (I think in the form of straight-up software development) was an exit route for a few people I know, with varying success.
Bell Labs used to be a good exit opportunity if you could wheedle your way in, although I gather it's less of a "math department minus the teaching" now than it was in its heyday. There should be at least a few similar industrial research groups out there (e.g. Google, Amazon, IBM).
I will reinforce everyone who mentioned programming experience being very valuable outside of academia. I think you do get a salary premium for having a math PhD, but in most places it's essentially a monetary equivalent of the "wow you have a math phd you must be so smart".
This is good, and mostly matches my experience. I will advise against Google specifically - they're not good at accommodating people with academic but not direct work experience, and it can be a pretty awful environment unless you have an unusually good manager.
I did a math PhD and now do bioinformatics. It's not a super common route, but a lot of people with pure-math backgrounds have ended up doing well in this field, and the job market is considerably better than in math. (That's a low bar to pass, though!) The other advantages of bioinformatics are that it still involves math and that it's very broad, so you can do interesting, different things every day (eg: attend a biology talk, do software development, and read a stats paper), and it's still possible to do academia if that's your thing. The downsides are that it's just nothing like doing math research and a lot of what we do is incredibly basic, like cookie-cutter stats 101 and making a few plots. Also, you'll be frustrated at the number of downright incorrect papers that get published in this field, with just absurdly bad statistics that somehow get past reviewers unscathed. Maybe it's just my niche, but it seems like all the statistical methods papers out there mess up one way or another and no one cares.
Echoing Luke G, make sure you really know some software development, and not just at the "I took an intro to Java course eight years ago" level. It'll make your life easier in nearly any job you could end up in later that could use your PhD. If you don't like computers, don't do bioinformatics.
I'll add again the standard advice that a lot of people don't end up enjoying their PhD, and if it's not really going to directly help your career, you should consider not doing it. I had a point where I did not enjoy my PhD, when I struggled to transition from classes to research work, and I might have dropped out then. Luckily my advisor put me on a tractable problem and I made some early progress, which was encouraging and got me back on track to liking what I was doing. Many other students are not so fortunate or pick projects that are simply too difficult. Many of those drop out. Many also don't even make it to that point. Especially if you're thinking of non-math jobs afterwards, pick a research topic that won't have you stuck for years, even if it's less sexy - no one will know/understand what you did anyway afterwards. You can work on that millennium problem in your spare time, but make sure you're making progress on something else too.
Math PhD programs, at least the fairly good ones, won't help you get non-math jobs, generally speaking. Expect them to be basically blind to the entire concept despite half their graduates going that route. Be proactive, since no one will be active for you.
I studied pure math, too--not a PhD, though I do know several PhDs as well. It can be great for career opportunities, *but* you really need to know computer programming, too. Most companies trying to fill quant-y roles are looking for general problem-solving ability rather than domain knowledge--if you have a solid base in math and CS, you can pick up most of the domain knowledge quickly. Also, everyone knows math majors are the smartest ;)
The most lucrative careers for math PhDs are usually in software and finance. The hot fields in software change every few years (AI, machine learning, big data, etc.), but a good math and CS background will make you well-equipped for any of them. In finance, there's a spectrum of different quant roles that love to recruit PhDs, and many traders and structurers will also recruit quanty people.
There's a pretty good diversity of roles available, and so you'll have some ability to make tradeoffs in work-life balance, amount of CS required, and the type of work--although it may take some amount of exploration before you really figure out what's the right balance for you. (I definitely had to revise a lot of beliefs about myself after a couple years on the job!)
A few assorted points to consider:
* Although a PhD is a unique opportunity to do research, it comes with a large opportunity cost and I know several people who seem to regret it--the research doesn't go as well as hoped, the social experience isn't as fun as undergrad, and you'll be a few years behind in your career. So make sure you're confident about loving research!
* Definitely take a few CS classes to maximize your opportunities for industry jobs. Even better, get involved in a software project.
* Consider taking some econ classes, too. Chances are it'll be embarrassingly easy for you, but the knowledge is very useful!
* I'd encourage you to keep an open mind when considering careers, including your assumptions of the ethics involved--a lot of media/social media portrayal is extremely misleading, and a lot of people in academia have misconceptions about industry.
* A summer internship in industry is a great way to pad your resume and help you figure out whether that industry is right for you.
> Chances are it'll be embarrassingly easy for you, but the knowledge is very useful!
Re-reading this, I should say I don't mean to belittle the field of economics--I have tons of respect for it and wish I studied it more. I just meant that typical Ec 101 curriculum will be pretty easy for someone with a strong math background.
As someone far too many years into a Physics PhD: if you love the field so much that you want to spend your life doing research, and a job is a way to avoid starving to death and dying of exposure in the meantime, then academia is right for you! If you are at all concerned with standard notions of "good job prospects", you don't want to go near academia, you're looking to convert to industry.
In that case, a Master's can be a decent investment, but PhDs are basically never worth it - at best you're trading a couple of years of salary for a lifelong sense of superiority, but actual pay scales don't value it higher than the same years spent working in the field, IIRC the studies. People who could do a PhD are very smart and get paid well, but the PhD itself isn't adding enough to justify the cost in time and health.
In the case of mathematics specifically, all the money at the moment is in the overlap of maths knowledge and programming knowledge, so you will need to learn some coding. Firms generally recognise that it's much easier to teach a genius mathematician to be a functional coder than to teach a coder graduate level maths, though, so do still major in Mathematics (or Statistics) and treat coding as a secondary skill for now - unless, of course, you discover you just love doing it :)
Unless things have changed since Back In My Day (PhD 2012), in pure math having only a masters is a sign that you were going for a PhD and dropped out. There are a few exceptions where the masters is a well-known program (e.g. Cambridge Part III) which you're doing as Undergraduate+ rather than as PhD-, but in general, a masters in pure math signals that you dropped out of your graduate program. I don't believe this to be true of adjacent disciplines, e.g. data science or computer science or statistics.
That sign is well-known within academia (for any discipline that doesn't have an established non-academic career trajectory for masters degrees), but I don't know how well-known it is outside academia.
I dunno. I have a good friend who is a physics PhD working in the microchip industry. His division is very technical and he only hires physics PhDs. FWIW.
To echo this, I have my PhD in Physics and work in optical comms; your career will eventually cap out if you don't have a PhD (albeit at a relatively high level that may be perfectly acceptable to you), and a PhD will both open doors to you that may otherwise have been closed AND make it much easier to get your first job. If you don't want to eventually run shit, a Master's is fine, and you should be very, very careful to properly weigh the reduction in quality of life you'll experience during the extra years you're in school.
You can generally get a SWE job (your PhD gets your foot in the door, but you will have to learn programming stuff on your own time and it's a hard transition). Personally I'd recommend going the quant finance route - you'd have better comparative advantage and they're more receptive to mathematicians. Not sure how your ethics feel about that though.
FWIW I feel okay about having done a math PhD and then done a career change (my first job after grad school was pretty awful, but things improved after that). It can be an interesting thing to do that leaves you reasonably well-qualified for other jobs later, but OTOH if what you really want eventually is career advancement in some specific field, it's better to just go straight for that.
I've worked in quant finance and I've worked in big tech (Physics PhD here) and I think the good times in quant finance are already over, you're likely to make more money and have a more pleasant job in big tech.
You should really, really, REALLY like math research to do this. And even if you do, a lot of math culture is an inward facing status game. Talented people somehow seem to get into the pure math track even when their skills provide a lot more leverage in practically every other research field.
What are some good research fields that I should go into instead? I have looked into economics, computer science, electrical engineering, and law, and while all of them seem enjoyable, first I have to get into the programs without the relevant degrees. Noah Smith has written about why an econ phd is a very good investment (also much of econ seems to be written in the same "language" as the mathematical logic that I study), is this something that can be easily pivoted to? Is doing something like that a wise decision?
I have an econ PhD. Being good at math is vastly more important than being good at undergraduate economics for econ PhD programs. I suggest you read intro and intermediate textbooks in micro and macro economics if you want to do this.
Having studied both, I think Smith is largely right to be long the economics Ph.D., especially over math. If you have good quantitative skills, the other one I would mention is computational biology.
Sorry, posted too soon. In computational biology you'd be bringing math skills to bear on a field that doesn't really like computers (or at least views them as annoying necessities) and isn't in general that mathematically inclined. Also, given our strange biomedical times, there's good reason to think the biomedical research funding complex is going to be comparatively richer than funding in other fields going forward.
If you're Yitang Zhang, you can use your math PhD to get a job at Subway.
(Less jokey response: if you can get someone to pay for your PhD, I say go for it. Just realize that you might end up a high school teacher thinking about logic in your spare time. Which strikes me as being much better than an adjunct lecturer thinking about logic in your spare time.)
Stories like Yitang Zhang are what keep me awake at night.
On the other hand, though, he seems to have had exceptionally bad circumstances to end up working at Subway. Growing up in Maoist China and all of the personal feuds with his advisors certainly didn't help.
I'm primarily interested in mathematical logic, especially from a proof theoretical angle, in addition to the rest of the set theory/model theory that people study in logic. Ideally the goal is to be a professor, and that is what I would go for, but I just want to think about what the fallback plans are in case things don't work out.
For mathematical logic, you'll want to do some further investigation of placement records of PhD programs. In the early 20th century, mathematical logic was seen as central to the discipline, but in the past 50 years or so, math departments have treated it as a backwater. Most math departments, even at research universities, have 0 specialists in logic on their faculty, and aren't considering hiring one. This is quite different from philosophy and (I believe) computer science departments, where most research universities have at least one specialist in their department, and will try to replace them if they retire.
There are definitely areas of CS where mathematical logic has a lot of overlap. Program correctness, model verification, etc. are niche but growing fields--software is becoming more complex and more relied on, so it's increasingly valuable to formally verify it works to spec. Programming language theory is also related to logic, and I imagine can lead to industry jobs working on compilers and other programming tools.
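To make the "program correctness" overlap a bit more concrete, here's a toy sketch I put together myself (nothing the commenters above mentioned, and it assumes the z3-solver Python package is installed): state a one-line claim about a branching absolute-value expression in logic, then let an SMT solver hunt for counterexamples. Real verification work is on a completely different scale, but the flavor is the same.

```python
# A minimal, purely illustrative sketch of machine-checked "program
# correctness", using the Z3 SMT solver's Python bindings
# (pip install z3-solver -- an assumption, not something from the thread).
from z3 import Int, If, Solver, sat

x = Int("x")

# A branching "absolute value" written as a symbolic expression.
abs_x = If(x >= 0, x, -x)

# Claim: abs_x >= 0 for every integer x.
# We check it by asking Z3 to find a counterexample to the claim.
s = Solver()
s.add(abs_x < 0)  # the negation of the claim

if s.check() == sat:
    print("counterexample found:", s.model())
else:
    print("no counterexample: |x| >= 0 holds for all integers x")
```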
People in proof theory can get jobs in crypto. There is a pretty big overlap between some of the stuff in zero-knowledge proofs and regular proof theory. In general, proof theorists can easily find work in computer science, and how far they are willing to go from theory pretty much limits their success. Make sure you get the very best advisor possible. Your advisor will get, or fail to get, your academic position. Everything depends on lineage, as a huge amount of material can only be transmitted through in-person contact.
Computer science seems like a reasonable alternative path, in particular because automatic theorem provers seem very interesting. Good to know about cryptography, as well. How does one make sure that things like a job in crypto are still in the cards after their PhD? Do you need to do side projects, or is it mostly sufficient to just do proof theory during grad school?
Note though, that ZKP employment will be ~entirely in crypto startups, some of which may come across as questionable. The more well established tech firms have virtually no interest in ZKP theory. MSR does fund it but I doubt their research efforts there are growing.
I considered a math PhD and chose not to do one ... Big Tech companies are so much more profitable.
Apart from the NSA, I'm not sure who hires math PhDs other than universities. I'm sure somebody does, but it probably isn't who you want to be hired by.
It's not the optimal way into big tech. On the other hand, the average PhD in big tech is probably working on more interesting problems than the average non-PhD so the hit to total lifetime earnings may be worth it.
Maybe it's naive, but I've been pretty surprised by how openly pro-war and pro-occupation much of the American elite class has revealed itself to be in the last week - especially our media elites. While I am normally not particularly enthused about the Greenwald/Taibbi left, I have to say that they do have some excellent points regarding how pro-national security state institutions like the NYT, WaPo, CNN etc. are. They are just openly clamoring for us to stay in Afghanistan, and against withdrawal. I guess I kinda vaguely knew this to be true, but their open pro-war cheerleading in the last week has been a bit astonishing. Makes me wonder how the inevitable war with China over Taiwan will go.
I would like to gently poke at people on the right who are convinced that the media establishment is uniformly leftist. When I was growing up, in a Nation-reading highly left-wing household, a huge part of the Left for us was being anti-war, any war, all wars. (I'm not saying that those are my views now - for example I was taught that the original Gulf War was Bad, whereas I now think it was justified - but I think being anti-war is like a core leftist value). The more complex/nuanced view is that the media is a series of elite institutions with its own worldview, which is mostly quite socially liberal, but not uniformly. I mean, they were in a constant state of hysteria about Trump for 5 years because it was great for ratings, which I know a lot of the Right confused with actually being anti-Trump. They loved the guy! You see all these stats about how much cable news viewership has declined in his absence, etc.
I see this almost uniform media-elite consensus that leaving Afghanistan is Bad as a pretty bad thing for Biden's approval rating/the midterms.
I read the New York Times pretty regularly (usually about five or six articles before coffee, and various others throughout the day) and I don't have the impression that they are "clamoring for us to stay in Afghanistan". I see a lot of big headline articles talking about ongoing problems with the evacuation, but they don't seem like the ones I remember from 18 or 20 years ago that were actually pro-war. They just seem to be following the journalistic convention of complaining about whatever is going on that seems to be bad, without any clear sense of what alternative they would favor.
Yes, there is a lot of that in the media, isn't there?
But would you favor the media recommending fully vetted policies beyond the standard opinion pieces (think "The Economist")? I think this actually might turn people off when they realize the newspaper has a desired policy outcome.
In this case, I don't think the media is acting as an elite institution, I think they're just following a very easy incentive curve.
The footage coming out of Afghanistan looks bad. We've got a hurried and seemingly ill-planned evacuation going on and a bevy of "this person/organization will be hurt by the Taliban's rule" stories that are almost trivial to report. These stories are easy to write and will therefore get engagement. I don't think this represents some kind of stalwart leftist consensus that war is good. I think this is just the immediate response to the negative images/stories coming out of Afghanistan that will fade quickly with time.
I predict with high confidence ($100 bet to charity if anyone wants to take me up on it) that by the time the midterms roll around it will be seen as a point of favor on the left that Biden got us out of Afghanistan.
They’re following the curve because they also personally don’t want to pull out. They could instead show the horrible images and say “look how bad it was the entire time! Thank god this is the last of it! We should’ve been out years ago!”
I agree, this is not about anything more substantive than seeing images which evoke an (extremely temporary) visceral reaction. The public did not care and was not interested in what went on in Afghanistan the previous 20 years, and as soon as the pull-out is over, they will revert to not caring. Right now they just feel bad because they're seeing emotional footage.
By the same token, if the media regularly broadcast footage of the real, brutal, grotesque reality of all kinds of common-place, ordinary things that go on every day, people would be up in arms (whether that be graphic footage of what goes on with people dying of Covid in the ICU or what goes on to get your food to your table or any number of other things we really prefer not to see or think about).
Hmm, my prediction was focused specifically on opinion among the American left. I'm not confident making any prediction about how public polling will shake out.
I would be fine with any of the following terms:
A majority of registered Democratic voters (50%+) will support the decision to leave Afghanistan by the time the midterms come.
No action to censure Biden over his Afghanistan pullout or to reverse said pullout takes place in Congress that receives significant support from the Democrat Party. (Significant here defined as support from Democrats sufficient to render said action successful.)
No significant portion of the 2022 Democrat candidates for election will support action to either censure Biden for his Afghanistan pullout or reverse said pullout. Significant here again defined as constituting a majority when combined with Republicans who hold similar beliefs.
You can make it coherent by re-casting it like this: the left is anti-war when the wars are against utopian leftist regimes that are attempting to remake society. They are pro-war when they perceive the enemy as a conservative regime that's attempting to hold society back.
OP literally gave the Gulf War as an example of a war that his left-wing family opposed. I don't think Saddam Hussein's invasion of Kuwait could be considered a utopian attempt to remake society.
I would associate nation building with a progressive world view, akin to domestic social engineering, so I don't think there's anything surprising about left-wing media being in favour of it, especially considering the regime that is replacing the occupation government. But there are – of course – many ways to reduce the political landscape to a single dimension, and we should just do PCA instead of being all subjective about it.
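Since "just do PCA" came up, here's roughly what that would look like in practice - a sketch only, assuming numpy and scikit-learn, with a completely made-up survey matrix (random noise, so the axes mean nothing here; with real responses they'd be the dominant directions of disagreement rather than axes we picked by hand).

```python
# Toy sketch of "let the data pick the political axes": run PCA on a
# (hypothetical) respondents x policy-questions matrix of survey answers.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 200 imaginary respondents, 8 policy questions scored from -2 to +2.
responses = rng.integers(-2, 3, size=(200, 8)).astype(float)

pca = PCA(n_components=2)
scores = pca.fit_transform(responses)  # each respondent's position on the two data-driven axes

print("variance explained by the first two axes:", pca.explained_variance_ratio_)
print("loadings of axis 1 on each question:", np.round(pca.components_[0], 2))
```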
> I would like to gently poke at people on the right who are convinced that the media establishment is uniformly leftist.
A lot of these debates are affected by the fact that there are several different axes of political views, and people variously use the labels 'left' and 'right' to refer to one or another of them. The sides on the various axes form loose coalitions, but these coalitions shift.
IMO, much of the time when rightists complain about these or those "establishment" institutions being dominated by the left, the more accurate claim is that they are dominated by the left on matters of race, sex/gender, sexual orientation, gender identity and, to a less complete extent, immigration, religion, and party politics. But not necessarily at all on other matters, such as the economy or militarism/pacifism.
I think the US political establishment is globalist more than localist on both sides, and so it makes sense that the establishment media would lean towards the left-globalist corner (Wilsonian, in Mead's characterization). The anti-war left is solidly in the left-localist corner (Jeffersonian, in Mead's characterization). You can look at any war we've been involved in and see how people from different points on the diagram influence the resulting policy.
The first Gulf War primarily was a right-globalist effort (Hamiltonian), but there were things to support for the left-globalists (Wilsonian) in the legitimacy of the UN and to the right-localists (Jacksonian) in demonstrating US power to deter other wars; it's the left-localists (Jeffersonian) that most opposed the war. While there was agreement from three of the sides regarding the war, that agreement didn't stretch to what to do when the war ended, which is why the response afterwards sowed the seeds of future conflict.
But I'm more anti-war cosmopolitan, which is globalist I guess. (I kinda wanted them to leave some forces in Kabul, but focused on defense, not offense, to maintain a pro-Western zone where women can be educated and stuff).
Two axes isn't enough. The other popular "second axis" is statist-libertarian...which again poorly captures my way of thinking.
The axes are limited by being defined by historical people that have either been President or have been close enough to the presidency to define US foreign policy. They are useful primarily in their predictive capacity, which mostly lies in defining a second axis able to describe US foreign policy beyond right-left (for example, while Ron Paul and John McCain were both on the right, they both had very different foreign policy preferences).
That being said, it's a spectrum of general policies, not a straitjacket. There's plenty of space between 'the US should always stay out of other countries' business' and 'the US should always act as policeman at the behest of the global community'. People holding positions near either of those extremes (and anywhere else on the spectrum) can rightly view themselves as 'anti-war'; at the same time, neither extreme is necessarily strictly pacifistic.
I agree - there are at least three axes of politics in the Anglosphere (socialism/laissez-faire, authoritarian/libertarian, "woke"/traditional) and that doesn't even cover international relations very well.
technological progressive/regressive, pro/anti militarism/violence are two others. There’s a lot of them! There are many relations between them (and often not just tendencies towards one direction but more complex ones)
100% this. I do wonder in this specific case if the warhawk stuff became left-wing because Trump pushed the Republican position to isolationism, and hell will freeze over before the 'Left' and 'Right' are allowed to agree on anything, never mind if that means one side has to take a position completely at odds with everything they'd been saying up till yesterday.
Could you link one of the NYT articles (ideally not from their Op-Ed section)? I specify not Op-Ed, because my impression (from the WSJ op-eds) is that those people are much less a part of the "media elite."
Also, outside of the media, which "elite class" are you referring to? My impression (once again from WSJ's reporting, so could be biased) is that most senators/representatives range from condemning to dubious of the execution of the extraction, but are largely not too harsh on the decision to pull out.
To a certain extent, it is Yarvin's "cathedral" that is advocating for a permanent War in Afghanistan. The deep state and the true believers in American power want to continue to project that power; if the US had abandoned Ashraf Ghani but held onto Bagram AFB the Taliban would have been unable to move against it.
This is somewhat ironic, since Yarvin wants to replace the cathedral with a new entity which could do a permanent War in Afghanistan, only effectively (with Battle Royale-style explosive collars or something).
Well the $50,000 question is whether the Afghanistan charley-foxtrot makes the Chicoms finally decide to settle things with Taiwan. Perhaps not right away, as even Joe Biden couldn't sit still for that, but after a Decent Interval. That would be an exceedingly dangerous event.
Increase. In the short term, obviously the US involvement in Afghanistan is over, so it's a temporary decrease. In the long term, Taliban is likely going to start new wars. At the same time, the utterly botched US withdrawal from Afghanistan will turn the public more hawkish; I doubt we'll withdraw from anywhere else anytime soon.
Whom do you believe the Taliban will invade? Turkmenistan seems the only vaguely-plausible candidate - the PRC, Iran and Pakistan are way, way out of their weight class, and Uzbekistan/Tajikistan are in a mutual defence treaty with Russia.
I agree with Melvin, below: it's not so much that Taliban will officially invade anyone, but rather that they'll conduct and/or aid and abet a continuous stream of terrorism on multiple targets.
Just some back-of-the-napkin numbers here, but the Taliban would have to do something like one and a half 9/11-size attacks a year to match the average yearly death toll of the war in Afghanistan.
OTOH, empirically it takes only about one 9/11-size attack every 20 years to produce the same average yearly death toll as the war in Afghanistan. I don't see why we should expect future attacks to be far less deadly.
Is Pakistan way out of their weight class? A good deal of the population and government are sympathetic to radical Islam. Also, Pakistan has its own Taliban, which may join forces with the Afghan Taliban to try to capture the Pakistani government.
These don't seem like war scenarios so much as Pakistan's own democratic process (or lack thereof, in the case of either a pro-Taliban or anti-Taliban coup by the Pakistani military).
"Invasion", to me, means "one state militarily attacking another with the objective of territorial control". Pakistan's military is of the same scale as Iran's (slightly ahead in some ways, like nuclear weapons); its monopoly on force is not seriously threatened by any Afghani action, even taking into account possible domestic collaborators.
I don't believe the Taliban can or will actually invade anywhere.
But I do believe that a Taliban-run Afghanistan is likely to become a safe haven slash training ground for violent Islamist movements that will start carrying out violent acts in... well, just about every other country on Earth with any sort of nonzero Muslim population. That's what happened last time the Taliban controlled Afghanistan (remember why we invaded in the first place?) and also what happened in the few years that ISIS managed to control a chunk of territory.
To me the bones (not the proposed culprits) seem obviously right - it's got that nice feature of correct theories that it explains a whole bunch of niggling weirdnesses in a satisfying way. That said, I've been burned in this idea-space before, so if there's good pushback somewhere I'd like to read it.
It seems like if this theory were true you should be able to find some really sharp discontinuities between adjacent areas that source their water very differently. In particular well (especially deep aquifer wells) vs. local rainfall vs. distant surface sourced water etc.
For example in the SF bay area there are several different water districts. There are areas where just going across the street gets you from water drawn basically directly via pipes from catchment basins in the Sierra Nevada, to areas served by the state water project where the water went through hundreds of miles of rivers, canals, and reservoirs, to areas served by local wells, all with very little mixing or pooling of the water. We should in theory see pretty dramatic differences by water district boundaries in that sort of situation and I don't know that we do (if we do that would be great evidence). Southern California has some similar comparisons.
I thought it was fascinating, but he seemed to dismiss most of the diet and exercise aspects with arguments like "subjects only lost 5 pounds in 6 months." It seems like if you make a couple of those changes across a fraction of the population for the long term, that adds up to a meaningful share of the change.
Sure, but is that because they actually aren't effective or because people stop following them? I've only done some casual reading on the topic, so I really don't know.
This question is difficult because it feels like those should be different things, but in practice the distinction isn't so clear.
Imagine a drug came out that would reliably cure/prevent cancer if taken regularly, but studies show that 90% of people - even terminal patients - stop taking it in the short to medium term due to intolerable side effects, which get progressively worse over time. Is it accurate to say this drug cures cancer? In a technical sense, sure, but in practice that phrasing seems misleading - certainly we wouldn't be confused about why people keep dying of cancer when all they have to do is take a pill to prevent it.
The case with dieting is actually worse than that, because some appreciable percentage of people will gain back the weight they lose even if they do stick to the diet. If you're trying to maintain a weight well below your set point you'll have to get stricter over time as your body gets more aggressive about compensating for the perceived famine. And your "eat, fool" circuits are waaaaay down deep in your brain - they can smear your prefrontal cortex against the wall if they have to.
Put differently, a diet that 90+% of people don't stick to has demonstrated itself to be incompatible with human physiology in a way no less serious than if it was ineffective due to not successfully interacting with the intended receptor. It looks different because will-power can temporarily override the physiology in the case of diets, but ultimately will-power will bow to its masters in the brain stem - you can't hold your breath until you die of hypoxia, and you can't diet yourself to well below your set point longterm.
There was a lot of justified pushback on r/SSC and DSL. Lots of correlations and claimed associations and little real evidence. Maps that look similar, bunches of n=4 studies smashed together and taken at their word, and above all lots of correlations where everything is correlated already, causal or not, etc. That said, it's still a lot of effort and interesting stuff put together.
This article argues that the part about watersheds is wrong: https://nothinginthewater.substack.com/p/contra-smtm-on-obesity-part-2-watersheds. The argument: In the US, the watershed map is the same as the map of African Americans (mouth of river basin -> larger plantations -> lots of slaves), and race could correlate with obesity for any other reason. Similar argument that it's confounded by socioeconomic status. They further argue that the map of river basins in China does not support SMTM's conclusion at all (Xinjiang is very obese for racial/social/economic reasons, and the rest of the obesity map doesn't actually line up with river basins well). The author promised to write a followup post against SMTM's altitude argument.
(Personally, as usual with this subject, I can't tell which way is up.)
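To make the confounding worry above concrete, here's a throwaway simulation (all numbers invented, assuming only numpy): if "lives near the bottom of a watershed" and "obesity" both track some third variable - socioeconomic status, say - but not each other, the raw correlation between them still looks respectable, and it vanishes once you adjust for the confounder.

```python
# Purely synthetic illustration of confounding; none of these numbers
# come from the actual obesity/watershed data.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
ses = rng.normal(size=n)                       # hypothetical confounder (e.g. socioeconomic status)
river = ses + rng.normal(scale=1.0, size=n)    # "exposure", driven only by the confounder
obesity = ses + rng.normal(scale=1.0, size=n)  # "outcome", also driven only by the confounder

r_raw = np.corrcoef(river, obesity)[0, 1]

# Crude adjustment: correlate the residuals after regressing out the confounder.
resid = lambda y, x: y - np.polyval(np.polyfit(x, y, 1), x)
r_adj = np.corrcoef(resid(river, ses), resid(obesity, ses))[0, 1]

# With this setup the raw correlation comes out around 0.5,
# while the adjusted one is near zero.
print(f"raw correlation: {r_raw:.2f}, adjusted for the confounder: {r_adj:.2f}")
```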
I definitely found the China map unconvincing, but the US map had my jaw on the floor and is a major point in SMTM's favor, so this pushback is excellent!
That said, I think CT is overselling the degree to which race can explain that obesity map. The three states at the mouth of the Mississippi, sure, but YOU CAN TRACE THE F***** MISSOURI, MISSISSIPPI, OHIO, AND TENNESSEE RIVERS ON THE F***** COUNTY LEVEL MAP (I actually deduced the existence of the Tennessee river from that map without knowing it was there). It's one of the most stunning pieces of evidence for anything I've ever seen, and it doesn't match the race data nearly as well as it matches the river basins.
I noticed that their state-level plot is from 2018 and the CDC now has the 2019 one up. They look somewhat different and frankly they don't match the watersheds all that well. WV, Alabama, SC, Indiana, Michigan - are these large watershed states? Some are part of a large watershed, but at the top, not the bottom, so that's inconsistent with their theory.
They said West Virginia is a distinct case, and it's quite reasonable to have a couple cases that are separate from their main thesis. I haven't looked at the others to see whether it lines up (northern Alabama would be in the Tennessee River watershed for instance, while southern Alabama would only have shorter and smaller rivers).
Population density would be a confounding issue there, wouldn't it? Those rivers have more towns along them than the surrounding countryside.
On the other hand, that county-level map (which I missed on my first reading) really is quite striking. I would like to see more than two other countries, though. Thinking about countries with significant population at both high and low altitude, what about France, Germany, Russia, Canada, Chile, Peru, Mexico?
Actually I just looked up obesity maps of Mexico and Peru and they do seem to support the trend too, with low-lying areas consistently fatter than mountainous areas. I do wonder whether it's as simple as the fact that walking up and down hills all day burns a lot of calories, though.
If I understand the argument correctly, it's well known that lower elevations are fatter than higher, but it's generally attributed to hypoxia, while SMTM is arguing that it's a side effect of lower elevations being at the foot of watersheds.
The Rio Grande and the Colorado river of the west are both heavily evaporating rivers and get lower flow as you go downstream, and thus would be expected to behave differently from rivers like the Brazos and the Colorado river of Texas, which are heavily agricultural and get more flow as you go downstream.
I've called it the "phlogiston" theory; a decent amount of circumstantial evidence but overall relying on a theory of a particle that may not exist (and none of the specific options suggested are particularly compelling).
At a high level, it seems to be understating how modern hyper-processed foods (Oreos, Diet Coke, etc.) contribute to obesity. Of course, any explanation for animal obesity wouldn't be based on that, but ... there are lots of possible explanations regarding domesticated animals, and I am not so sure that the "trend" holds for wild animals.
It understates how hyper-processed foods contribute to obesity? On the contrary, it may explain *why* hyper-processed foods contribute to obesity - because processing contaminates the food.
Somewhat related, the Peter Attia podcast had a recent episode that I found somewhat fascinating for the discussion on the differences between lab mice and wild mice: https://peterattiamd.com/steveaustad/. Basically, typical lab mice are so wimpy and fat they can barely be considered mice at all, and I imagine this is somewhat true for domesticated animals in general.
I mean, a bunch of theorized particles have panned out...
The hyperpalatable foods theory has a couple sticking points for me: for one thing, I'm fat and it's definitely not because of processed food. I eat too much in general, but I've cut out the gluttony in the past, and while I can straightforwardly lose 20-30 pounds I'm still substantially overweight - for some reason my set point is 245 lbs. And testimony from the anti-diet space gives examples of people winding up in the ER with organ failure despite still being kinda chunky (that particular claim was from Ragen Chastain, who should be trusted as far as you can throw her, but still). This calls to mind Guyenet's anecdote of the obese mice starving to death while defending body fat stores higher than the average mouse has during normal life - in both cases something has gone badly wrong with the set points, and I really don't see how hyperpalatable food explains this. Throw in the lab animals getting fatter on the same food as 50 years ago and the theory is on life support.
> And testimony from the anti-diet space gives examples of people winding up in the ER with organ failure
I mean if you ate 1,500 kcal of HFCS or chocolate fat-free smoothie per day, or something “healthy” like an olive oil / kale diet cleanse, I could totally see that causing organ failure. I can't see organ failure being that common if you eat 2,000 kcal/day of a healthy balanced natural diet, or even burgers, fries, and some lettuce, or Soylent, or something.
Also, if you watched the Rogan debate where Guyenet crushed Taubes, the only unanswered point I remember Taubes scoring was when he hammered Guyenet on a population that had obese mothers and starving children. How a population like that comes to be has haunted me ever since - SMTM's theory explains it.
I'm looking for recommendations of little-known but truly excellent songs from the classic rock era. These would be songs that – in some alternative universe – would have been smash hits, but for the flapping of a random butterfly's wings in some remote corner of the world. Thanks!
Blackburn & Snow were a popular boy/girl early 60s folk-rock duo whose only album was not released until decades after they broke up. “Yes, Today” is my favorite of their songs - might have been a hit if it had come out when it was supposed to.
> We can start with his writings on music, since that seems to be what he is known for. He has helpfully ranked the best 100 rock albums of all time in order…
> If that’s too broad for you, he also provides his top albums year by year … every single year from 1967 to 2012. He also gives genre-specific rankings for psychedelic music, Canterbury, glam-rock, punk-rock, dream-pop, triphop, jungle … 32 genres in all. Try punching “scaruffi [band]” into Google; I defy you to find a major musician he hasn’t written a review of. These are all just part of the massive online appendix to his self-published two-volume history of rock music.
I just looked into that page... it may be a good pointer to semi-obscure albums, but he seems to belong to a particular kind of avant-garde underground contrarian. When someone suggests "Trout Mask Replica" as the greatest rock album ever, there's something wrong with them. Also, I don't have the knowledge to judge all of his best-of lists, but on the ones where I do, I can say he makes some bizarre choices. Listing Cream and the Animals as milestones of progressive rock, while skipping the Beatles and Moody Blues - no. Just, no. Charlie Watts (RIP) at number 3 of the best drummers? Pleeeeease. Klaus Schulze with "Irrlicht" for the best rock album of 1972? That was not even a rock album at all, by any stretch of the imagination. So, while checking out his recommendations doesn't hurt... - no, scratch that. Trout Mask Replica does cause pain and discomfort. Check out his recommendations, but at your own peril, and don't be frustrated if you don't like any of them.
Heh, thanks for looking into it. I'm not really familiar with rock, so I can't really judge. But the ones I did check out I didn't like much either.
Luke writes that "In general, the greatest albums are not ones that should be listened to near the beginning of one’s explorations of music." I certainly haven't reached that stage for rock then it seems
Or some artists are just not your cup of tea. I have been listening to various flavors of rock and metal for thirty years now, and some of the supposed cult albums still don't do anything for me. Also, rock music should not require a doctorate in music theory to appreciate, at least on some level (and I'm saying that as a fan of progressive rock). Rock should have melodies that touch you emotionally and haunt you, and rhythms that make you want to rock out. Some melodies and grooves work on some people and not on others... but if you can't find anything where you can at least say, "I understand why some find that enjoyable", it may be that it's just not a great rock album. (It might still be a great something-else, though.)
It eventually became popular 30 years later but Pink Moon by Nick Drake was released in 1972.
Roky Erickson's Gremlins Have Pictures has a 70s rock vibe though it was released in 1986. I think it's a masterpiece. His live version of Heroin really captures that era.
"The classic rock era" is a bit vague, but here are bands and songs that I've come across that may fit your criteria:
- the band Love put out the excellent album "Forever Changes" in 1967 - psychedelic rock based on acoustic guitars and the occasional trumpets and strings. Standout songs IMO are "A House is not a Motel" and "Maybe the People would be the Times".
- Beggar's Opera had a minor hit with "Time Machine" - early prog, lots of mellotron, dramatic vocals... I love it.
- Starcastle sounded a lot like Yes, but maybe a bit more organized. At more than ten minutes, "Lady of the Lake" is an unlikely candidate for a smash hit, but still a great song, and quite catchy for progressive rock.
- the prog version of "Elanor Rigby" by Esperanto may be the best Beatles cover ever.
- in the blues-based corner of classic rock, Ashbury had an excellent album in "Endless Skies", but never achieved breakthrough success. "Warning!" is a good place to start.
- Bob Seger recorded an awesome cover of "Bo Diddley/ Who Do You Love", which AFAIK never got anywhere... but it rocks hard.
It's very creative musically, but they lost the story by stretching it out over 8 minutes.
The singer even tacked the end of verse 3 onto the beginning of verse 2, cutting out 1/3 of the lyrics. It's become an art object instead of a human story.
Kansas also did a prog cover of it (less "prog" and more classical) with the London Symphony Orchestra, on "Always Never the Same", but I would never call it the best Beatles cover ever.
Ray Charles did a good Rigby cover. So did Joan Baez. Paul McCartney also did a cover of Eleanor Rigby, on his album /Give My Regards To Broad Street/ (1984). Not my favorite, but it's followed by a new composition, "Eleanor's Dream", which is worth listening to for the Eleanor Rigby connoisseur. If I had to pick a favorite, it might be Rare Earth's R&B cover of it.
Again, these wouldn't have been hits, and are only marginally rock, but I think they're great:
- Bruce Cockburn, generally
- T-Bone Burnett's album /Trap Door/ (1982)
- Robyn Hitchcock's "My wife and my dead wife" (1980)--the lyrics fascinate me, because the story it tells is IMHO unlike anything else in literature in its intent / function / purpose
Robyn Hitchcock has quite a few great tunes - “I Watch the Cars”, “The Lizard” and “I Often Dream of Trains” are three of my favorites. “My Wife & My Dead Wife” is also excellent, and is somewhat thematically similar to Noel Coward’s play/movie “Blithe Spirit”
It's easier for me to think of songs that I think were "truly excellent" but could never be "smash hits". Most of the repertoire of the midwestern art-rock band Kansas fall into that category. They did later become highly influential on Scandinavian metal, but nobody else seems to have noticed this. Some of my favorite songs by them are "Lamplight Symphony", "Cheyenne Anthem", "Ghosts", "T.O. Witcher", "Peaceful and Warm", "Rainmaker", and the entire album "Point of Know Return". One of their songs, "Miracles out of nowhere", has a 4-part fugue starting at 2:25. It ain't Bach, but it was a sincere attempt to cross rock with Bach.
Similar story for Jethro Tull's album /Songs From the Wood/, which is super-famous with critics, but I don't recall hearing any of it on the radio.
There's a very obscure song by Jonathan & Charles, "Mrs. Chisholm's Weekend" (1968), which is in the same vein as "Eleanor Rigby" (1966), but had the misfortune of being released on a Christian album. There are also a few songs from that period by the Christian rock musician Larry Norman which could have done better in the mainstream, like "I'm the Six O'Clock News", and maybe "So Long Ago the Garden" (1973), which sounds like a missing link between beat poetry and rap music, though I doubt any future rappers ever heard it.
(Take that "highly influential on Scandinavian metal" with many grains of salt. That's just my guess, based on some guitar-solo similarities with melodic death metal. They obviously influenced Dethklok, musically and visually, which is however a fake Swedish melodic death-metal band, which however probably sold more albums than most actual Swedish melodic death-metal bands.)
Frank Zappa is generally more vulgar than I care for, but "Muffin Man" was the song that made me want to get better acquainted. Give it a couple of minutes to actually get going... :)
I'll throw in Blue Oyster Cult's The Red and The Black. It's a banger. The lyrics are about a man fleeing the Royal Canadian Mounted Police but really it's the music that gets you going.
In an alternate universe where a 10-minute rock song could become a smash hit, I think Green Grass and High Tides by The Outlaws would have.
Re. the recent announcement that NYC will provide free education for 3-year-olds in 2023, and the $3.5 trillion "stimulus" bill that passed in the US Senate on Aug. 12, including some large amount for free preschool for 3- and 4-year-olds in America. (I can't seem to find out how much, other than that it's part of a $726 billion allocation, and that an "expert" predicted preschool for 3+4 year olds would cost $60 billion (yearly?)).
Scott, or maybe someone else, has written posts showing that the consensus now is that pre-school for 3-year-olds has no lasting educational benefits.
Now Vox has an article (https://www.vox.com/future-perfect/2018/10/16/17928164/early-childhood-education-doesnt-teach-kids-fund-it) saying that it has no lasting educational benefits, but has lasting health benefits. It says that the studies showing no lasting educational benefits used control groups, while the studies showing lasting health benefits were longitudinal studies mostly without control groups. It suggests that the health benefits were due to the actual health-benefit portion of the pre-K education, and that maybe the educational part, which is much more expensive, isn't useful.
So it sounds to me like scientific consensus at the moment would say that Congress is throwing a lot of money away on preschool for 3 and 4 year-olds, or at best betting a lot of money on a hypothesis, blowing right past the US budget cap before we've even started on the $10-to-$100 trillion Green New Deal.
The US has thrown away a shit-ton of money on useless pre-school education for decades, it's not going to stop now. It's a lot like "infrastructure" bills to address "our crumbling roads and bridges," which pop up pretty reliably about once a decade or so. (You'd think at some point the voters might say: WTF? we voted for $x billion less than 10 years ago for those God-damned crumbling bridges, where did it go? Either the bridges should be brand-spanking-new now, and we don't need more $billions, or else whoever disbursed the money 10 years ago should be dragged into the public square and hanged. But they never do.)
Can you link some evidence that they repeatedly claim to repair the same bridges again and again, more often than in other countries? And that this repeats on a larger scale - not just with some specific bridge?
Nope. The only evidence is my memory. I've heard this spiel for 40 years, and it's always the same. I've never heard any politician tell me "welp, we fixed all the Interstate bridges in the Northeast last time, it's the Midwest's turn now" or "OK, we fixed up Amtrak last time, they're good to go, so no money for that in this bill."
It is 100% normal that the Northeast has bridges requiring maintenance/rebuilding at various times.
Bridges are built at different times, requiring differing service intervals.
My city in Poland right now has two bridges under maintenance, last year another serious rebuild was completed, and a new railway bridge is under construction. Many other bridges are slowly aging and will require costly repairs in coming decades.
We are definitely not rebuilding all bridges in one year each century. That would be a bad idea for many reasons.
AFAIK this is 100% normal, though some grift/stealing/misspending hidden in this is possible.
There was an article recently (can't remember where it was) suggesting that big federal infrastructure transfers to states just make states spend less of their own money on infrastructure. So the infrastructure spending stays the same but with different state-to-federal funding ratios.
Even more than most "education," preschool is pretty transparently just childcare with some baseline attempt at quality control (background checks, actually interacting with children rather than setting them in front of a device, etc).
The nation probably has some interest in more parents being able to go back to work after having children within 3 - 5 years, as opposed to in 5 - 8 years, and also to have more than one child. Grandparents are increasingly continuing to work until they're too old to do a decent job raising grandkids either, grandkids are coming later, and they often live too far away anyway. Is it a strong enough interest to justify the cost? I'm not sure. Our district is already short elementary educational assistants, and it's not clear we could find the "teachers," even at a decent wage. I'm not sure what the situation in New York is.
I'm somewhat doubtful that government sponsored, standardized, acceptable childcare at that age would be significantly less expensive if they didn't bother with the educational component. The workers would still have to do something to prove conscientiousness, safety, and generally that they were responsible adults strangers should trust their children with, which isn't cheap.
I would theoretically prefer an extended family culture where there's always someone in the house minding the young children, but my revealed preferences contradict that.
Echoing the point about how hard it is to find qualified workers for these positions: currently, to do pre-K childcare you need to be happy to work with kids, be willing to do so for less money than you could make as a teacher/teacher's aide/someone inside the official education system, AND have a clean enough record that you can pass a background check. This trifecta just doesn't exist in most cases.
The expensive component is the number of staff required, though inner city rents can also contribute. Reducing the credentialing required for the staff can reduce costs, though, and I'm doubtful the credentialing has much if any correlation with the ability to teach pre-schoolers (though this part obviously depends substantially on which jurisdiction we're discussing)
Re: credentials for childcare staff, I don't know what the situation is like in the USA, but over here there are more and more standards being set for what children should be achieving.
Here's a link to the standards manual for Early Childhood Education, which covers from Birth to Six Years of Age for the settings.
This is the kind of stuff that is being more and more considered a necessary part of children's education. It's no longer a case of "the parents are working, the kids need to be taken care of, for a group of three year olds just let them play with toys, learn how to get on with other kids, and since this is the age Biting happens, train them out of doing that"; now the kids and the childcare provider/preschool have to be hitting all these sorts of goals (e.g. as below):
5.2.8 How do you enable the child who consistently plays alone to interact with other children?
5.2.9 In what ways are children facilitated to work together in small groups?
If I remember what I was like at the ages of 4-6, I would have *hated* being chivvied into "interacting with other children" when I was quietly and happily playing alone with blocks or whatever, but we can't let kids be solitary! we must make sure they are sociable! and learn to interact! so they can be productive and efficient workers!
On the other hand, credentialling is here being used as a proxy for social class and overall trustworthiness. Call me a snob, but I don't want my kids taken care of by the kind of bottom-of-the-barrel employees you'd find at McDonald's or the TSA.
It's not "education", it's babysitting so parents can work. See e.g. Bernie Sanders' pitch for universal pre-k - it's like a thousand words about affordable child care, with one throwaway line about unspecified "well established" benefits. Look at it as a socialist program to provide a service to working-class families, paid for by taxing millionaires and billionaires, and it makes sense (whether or not it gels with your values is another question entirely).
In addition to helping out working class parents, I also think there are benefits to the children, not in educational terms but in terms of socialization and normal development. It is really not natural to be cloistered with a hovering adult looking over a kid 24-7 who puts that kid's interests above all others. Kids are supposed to be running in a pack with other kids, learning to socialize and bump elbows and get in conflicts and work them out. Back in the olden days, that could happen just within families or extended families because people had so many more kids -- you couldn't give them all attention. Now it happens with daycare or preschool.
I well remember the first day of kindergarten, how the kids who hadn't been in daycare or pre-school previously were the ones crying in the corner, needing to be comforted by the teacher because they were terrified. The daycare kids were all having a great time. A parent looks out for their own kid's interests above the other kids', and it's not great to be marinated in that situation for 5 years straight when that isn't how society works. Kids that aren't socialized young are much like all the COVID puppies that didn't get socialized -- anxious and neurotic and not as resilient, when they aren't in perfectly controlled and safe contexts.
I am someone who is prone to rather extreme introversion and wanting to be by myself indulging my cerebral interests. If I had been alone in my mother's care my first few years, I would've spent all my time reading and playing alone (rather than just doing that on weekends and evenings) and I'm sure I would have become a serious weirdo with social problems and anxiety. I completely credit being only a marginal weirdo and being able to get along in society with being in daycare in my formative years, and getting used to having to socialize, and having a minder who did not prioritize my interests either above or below those of the other children.
I broadly agree although I would point out that kids "running in packs" in extended families or communities usually happened within mixed age groups, so that the older/more experienced kids would lead, establish and maintain norms, point out dangers, and guide the younger kids through their exploration of the world. This probably improved outcomes for the younger kids, while giving the older ones experience in leadership and parenting. This is how it was for me growing up around my sisters and cousins.
Also, there are differences in impact between supervised and unsupervised time, and between structured and unstructured time. Daycare/pre-K leans more towards supervised and structured, which is fine to an extent, but I get the sense that kids in general - at home and at school - are getting less and less unsupervised, unstructured time. This is bad because they need this time to develop executive function (by deciding what to do with unstructured time, individually or collectively). Of course daycare/pre-K "curriculums" can build some unstructured time into the day, but now we're getting to the details where the quality of the program matters a great deal.
Yes, I fully agree with this. And even in schools, it didn't used to always be so age-segregated...in one-room schoolhouses, etc., you had kids of all ages learning together and helping with the younger kids. I don't know why we have such strict age-restriction nowadays, with large groups of kids all exactly the same age, which just breeds a lot of competition and status games rather than more natural age-based leadership and social hierarchies that better reflect the adult world.
I had a pretty ideal daycare situation as a kid, where a home-based daycare provider was watching a group of about a dozen kids ranging in age from babies to about 10 or 11, and we all played together and had a lot of unstructured time just running around outside or whatever. It was wonderful and I completely credit it with my social development, which I would not have received at home. I think some years later the state came down on her and required that she implement strict minder ratios and more age segregation, which is too bad.
This is pretty much my take on it too. Those pushing for universal Pre-K don't seem to care whether the educational benefits are real or not, but they do care a lot about freeing up individuals (mostly women) to get back into the workforce. They seem to understand that offering universal babysitting is a harder sell than offering better education for small children. That it would also save some kids from spending 24 hours a day in really crummy homes may be a factor too.
You might see this as a socialist program; I see it as evidence that the values of capitalism, consumerism and materialism are fully baked in to progressivism. This is Peak Capitalism. The state will now take care of your children even earlier so you can get back to the important part of life: being a dedicated worker bee.
As a frequent critic of capitalism, I appreciate the cynicism, but I wouldn't take it for granted that parents would freely choose to raise their own young children full-time even if we lived in some kind of post-capitalist utopia where machines made all our stuff for us. Some people really love parenting (I'm one of them - I'd be a full-time stay at home dad if I had the means, and working from home during the pandemic really cemented that impression, which I gather is not a common reaction), but most people seem to do it because they have to, or because it's what's expected of them, or because they sort of fell into it by default. Many parents need time away from their children, and just because it's often the men who actually get the time away, that doesn't mean that there aren't also women who would rather go out into the world and do something - get an education, work a fulfilling job, or even just enjoy recreation - than stay home with kids for 5-6 years until they're ready for kindergarten.
That is to say, I think even a well-organized socialist society, freed from the constraints of a materialist, consumer-driven economy, would provide some kind of collective childcare in order to give parents some personal time. This would just be a recognition of the fact that raising children is a difficult but necessary job, and so society should take on some of the burden from the individual.
The classic statement is that it takes a village to raise a child - but modern people mostly just don't live in villages any more (or in the kind of extended-family communal households that many Bay Area rationalists do have) and so raising a child these days either requires expensive childcare, or the parents being with the child full time, as opposed to the more traditional options.
The question is whether the childcare support is provided by family and friends (the traditional way), by the market (the capitalist way), or by the government (the socialist way). It would be good to enable more people to live in such a way that the traditional means are readily available, but for better or worse, our society has developed in a way that puts that out of reach for most people, even more than the capitalist cost of childcare.
Sure, but I think what LesHapablap and I both lament is that there isn't the choice; some people would love to stay home and take care of their kid(s) at least until they were four years old and ready for primary school, but there's not the opportunity - both parents need to be working nowadays in order to pay mortgages etc.
Which is how you end up with the absurdity of the mother (usually) working a job where the majority of her pay goes on childcare so she can work a job.
This. Also, another cynical angle - a world where a mother works to pay for the costs of childcare is one with a higher GDP than one where the mother stays at home to look after her own children, since the work of looking after children is now visible to the state.
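To put toy numbers on that accounting point (all figures below are invented, and the wage is treated very roughly as a proxy for the value of the parent's market output):

# Toy GDP accounting with invented numbers, just to illustrate the visibility point.
# Parent stays home: childcare still happens, but no market transaction is recorded.
gdp_contribution_stay_home = 0

# Parent takes a job and pays for childcare: both the wage and the daycare fee
# are market transactions, so both show up in measured GDP.
wage = 30_000            # hypothetical annual salary
childcare_fee = 25_000   # hypothetical annual daycare cost

gdp_contribution_work = wage + childcare_fee   # 55,000 of now-visible activity
net_household_gain = wage - childcare_fee      # 5,000 actually left over for the household

print(gdp_contribution_work, net_household_gain)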
I see this a lot, but this is just gains from trade. You could make the same argument that getting a job, versus growing your own food or building your own home, just pushes money around and makes GDP look higher.
But if I'm a better programmer than I am a farmer/butcher or construction worker, then I'm actually materially better off making the trade and society is better off if people do this in general, and I've never seen the argument why this doesn't apply to child care.
It isn't the same argument unless professional child-rearing, e.g. in the preschool setting, is superior to what the child can get at home -- which nobody at all believes. So the situation is more similar to a peasant who has to plow a field with a stick, inefficiently, because he can't save up enough money to buy a good plow (and horse). In this case, the family can't supply enough saved money (e.g. through one parent earning enough for all) to buy the most efficient and effective possible child-rearing -- which is done by one parent.
You are "actually materially better off" if you drink a government-provided slurry with all your necessary nutrients instead of cooking yourself or buying food from restaurants. And society would be better off too. And you can use your extra pocket money to bid on a house someday. Actually you may have to take the government slurry in order to afford the house, since everyone else bidding on the house has already done so.
Whether it's a real benefit versus a shadow benefit will depend on several factors. One is how many children can be "properly" cared for by an individual. If a daycare worker cares for approximately the same number of children as a parent, then that's not adding much on the daycare side (the parent may be doing more economically valuable work than the daycare worker, so that's potentially important still). I am scare-quoting "properly" because I have significant doubts that the kind of upbringing a child can get in daycare is equivalent to what they can get from a parent who is regularly with them. Obviously huge caveats here, and some children are better off just being taken away from their parents, so there's certainly some room to justify childcare no matter how poor the daycare is.
I'm more concerned about the quality of industrialized daycare than I am about the current implementation, to be sure. Right now, the parents can make an economic choice about whether their productivity in society is greater than the cost in terms of day care. Put simply, if they make less money than daycare costs, they stay home. Generally, they would (and should!) stay home even if the costs are anywhere close, like making $20,000/year and paying $12,000/year for day care. Once you factor in costs related to work, like additional clothing, driving costs, etc., that ratio really isn't worth it. That's not even considering the value of spending time with your own kids.
Excellent point, and I appreciated Andrew Yang's efforts to popularize the idea of trying to measure this type of value and optimize for it rather than just for GDP. There's a bit of a streetlight effect where optimizing for GDP happens because it's easier, not because it captures the overall well-being of society.
Smart Codex Readers: Convince me that COVID Vaccine Passports are a good Idea.
Currently against them, as I find them:
1) Counterproductive; they erode trust
2) Discriminatory and segregationist
3) Onerous to implement
4) They seem to discount natural immunity
5) Inevitable privacy concerns
6) No methodological target for removal; I fear the passport requirement will shift from "unvaxxed" to "under-vaxxed"
7) A herding effect, where the unvaccinated further congregate among themselves
8) Divisive, leading to distrust of public health institutions
9) They seem to ignore that the majority of exposure occurs within households, and scapegoat public spaces
10) COVID vaccines are not sterilizing, making vaccine passports essentially worthless
11) My libertarian bias: this just feels wrong
Looking for arguments in favor:
Should Hawaii where I live implement this policy?
Does a temporary health vaccine passport actually raise vaccination rates, improve "public perception of safety", and lower hospitalizations/deaths? Or is it a divisive policy that could be ineffective, counterproductive, and possibly lead to worse health outcomes and lower trust?
Would your "libertarian bias" feel better if instead of yes/no admission and vaccine passports, that businesses simply implemented separate sections and differential pricing?
For example, I am absolutely positive that if airlines offered "vaccinated only" flights, then (1) LOTS of people would be willing to pay more to be on those flights, and (2) if the airline chose to charge more for fares on the unvaxxed flights to offset the risks of serving those passengers, it would motivate more people to get vaccinated. Is that somehow better in your mind than if airlines simply started requiring vaccine passports for all passengers and stopped serving the unvaccinated?
I don't find any of your other points compelling. I see no privacy concern or slippery slope with showing people and businesses that you desire to interact with a simple verification of your vaccine status. Nor do I care if unvaxxed people feel that it is divisive...given that they are the ones acting like petulant anti-social babies, I could care less. The rest of us don't want to be around them. Just like in the 80s and 90s, society decided they didn't want to be around smokers, so that no one can smoke anywhere nowadays but their own home or cordoned-off smoking sections.
My general reaction is that EVERY time there is any sort of change at all in public policy...whether that is making DUI a felony or requiring seatbelts or issuing social security cards or making bars and restaurants non-smoking or updating a user interface or literally *anything*, ever, people yell and scream and complain and gnash their teeth and then the change happens and everyone gets over it and forgets about it two weeks later. The unvaccinated are acting that way and making a huge ado about nothing, at best, and at worst making a terrible health decision for themselves merely out of spite and wanting to stick it to libs/elites/whoever they think they're sticking it to.
I would imagine that a place like Hawaii, which is hugely dependent upon tourism from the affluent and older crowds (who are the most vaccinated people), would benefit from implementing the policy. The cosmopolitan frequent-travelers with lots of disposable income are not the people making up the anti-vax crowd.
I know it's a meme that the unvaccinated are dissenting to spite you, but we really aren't. There are 2.5 times as many reported deaths in the VAERS database for COVID vaccines as for every other vaccine in the database combined (4,831 vs 1,919). Maybe that doesn't worry you, but can you at least admit it is reasonable for a person to worry about it?
Block is correct that there are a bunch of reasonable-sounding fellows headed by Bret Weinstein going around telling people that "the vaccine" is dangerous (and not making an effort, when I watched them in June, to distinguish between the 4 available vaccines in the two countries discussed).
I am now convinced that their apparent reasonableness is only skin deep, as it seemed like they never considered any alternative hypotheses before concluding "vaccines dangerous", and even at that time they were misrepresenting some data they had access to in obvious ways.
Is what you consider "reasonable" really rational, though? I guess this gets back to the whole rationality thing and the underlying basis for ACT. You present two sets of numbers and say, "Hey, look at the difference here between COVID vaccines and the other vaccines!" It sounds reasonable to at least note the difference in rates. But you posted these two numbers without putting them in the context of any other data. If I put that 4,831 number in the context of the 190,000,000 people in the United States who've received at least one dose of the vaccine, that 4,800 number seems a lot less concerning.
Just looking at the US numbers, though, shows that we have an overall 1.7% chance of dying from a COVID-19 infection and a 0.0019% chance of dying from one of the vaccines. That makes it roughly 895 times more likely that you die from the disease than from the vaccine. Now, you can burrow into those basic numbers and argue that the IFR is lower because of all the undiagnosed cases, but that difference is still pretty damn big. It's hard to argue that your chances aren't better with the vaccine than with a COVID-19 infection.
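As a quick back-of-the-envelope check of that ratio, using only the two percentages quoted above:

# Back-of-the-envelope check, using the percentages quoted above (taken at face value).
p_die_covid = 0.017       # ~1.7% chance of dying from a COVID-19 infection (as quoted)
p_die_vaccine = 0.000019  # ~0.0019% chance of dying from one of the vaccines (as quoted)

ratio = p_die_covid / p_die_vaccine
print(f"Dying of the disease is roughly {ratio:.0f}x as likely as dying of the vaccine")  # ~895x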
So, to answer your question, I don't think it's a reasonable thing to worry about, because I think your cognitive bias against the COVID-19 vaccines is making you latch on to data that puts them in a negative light.
I take your point, but your analysis leaves out a few things too. You assume a binary choice: "Either I get the vaccine or I will catch COVID" but you might not actually ever catch it. The choice is between a small danger today vs the possibility of a larger danger in the future.
Even that, however, assumes near 100% effectiveness of the vaccines, which at this point is clearly untrue. In my home state (Vermont), around 25% of cases in the last couple of months were among vaccinated people. Also, it's a small sample size, but the death rates have actually been higher in breakthrough cases than the overall rate. All of which muddies the water enough that I still contend it is reasonable to at least wait for more data.
You can believe whatever damn fool thing you want, but numbers argue against what you believe.
1. Current estimates are that Delta has an R value around 7. This number is based on in vitro studies (rather than epidemiological surveys), but that puts R value of SARS-CoV-2 3x higher than the 1918 H1N1 flu. And that's 2x-3x the R value of the common cold. You've gotten the common cold, haven't you? You've gotten the flu, haven't you? Why do you think you can avoid catching COVID-19 unless you have special powers that exempt yourself from the laws of probability?
2. Or maybe you're sitting by yourself in your survival bunker waiting for the Zombie/COVID apocalypse to pass. But SARS-CoV-2 is a coronavirus, and in the past coronaviruses haven't shown themselves to be amenable to herd immunity. COVID-19 will likely become COVID-22, -23, and -24. It might mutate into a relatively harmless cold-like virus. Or it might get worse. We'll see. But when you run out of dried beans and beef jerky you're going to have to come out of your bunker and deal with people who might be contagious.
3. The fact that vaccines are less effective against the Delta variant is actually more reason to get vaccinated — because the vaxxed now have a higher probability of being contagious. So, you won't be able to depend on the vaccinated to protect you.
4. Also, from a mathematical perspective, if Vaccine X is only 80 percent effective against Variant Y, and 80 percent of the population has been vaccinated with Vaccine X while 20 percent remains unvaccinated, then the pool of potential breakthrough cases (20% residual susceptibility × 80% vaccinated = 16% of the population) and the pool of unvaccinated (20%) are of roughly equal size. So, assuming equal exposure, you'd expect a bit under half of the cases to be in the vaccinated population and the rest in the unvaccinated population (see the quick sketch below). The fact that Vermont is seeing only about 25% breakthrough cases against Delta still shows that COVID-19 is preferentially attacking the unvaccinated. And if you look at the relative percentages hospitalized, if Vermont is like other states, probably only 3% of those hospitalized for COVID-19 will be from the vaccinated population.
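A minimal sketch of that arithmetic, using only the hypothetical 80%-effective / 80%-vaccinated figures from point 4 and assuming everyone still susceptible gets equal exposure:

# Hypothetical figures from point 4: 80% vaccinated, vaccine 80% effective.
vaccinated_share = 0.80
efficacy = 0.80

# Share of the whole population still susceptible, by group, assuming the
# vaccine simply removes 80% of the vaccinated from the susceptible pool.
susceptible_vaccinated = vaccinated_share * (1 - efficacy)  # 0.16 of the population
susceptible_unvaccinated = 1 - vaccinated_share             # 0.20 of the population

expected_breakthrough_share = susceptible_vaccinated / (
    susceptible_vaccinated + susceptible_unvaccinated
)
print(f"Expected share of cases that are breakthroughs: {expected_breakthrough_share:.0%}")  # ~44%

Under those assumptions roughly 44% of cases would be breakthroughs, so a reported ~25% is still consistent with the vaccine shifting cases toward the unvaccinated.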
>You've gotten the common cold, haven't you? You've gotten the flu, haven't you?
I don't know about Block, but I've never gotten the flu, and I'm no youngling.
More importantly, "the common cold" is about two hundred immunologically distinct diseases with nearly identical symptoms, that cycle in and out of circulation as herds become mostly immune to the last one. Most people will at some point catch *a* cold, very few people catch *all 200* colds.
"The flu" is a couple dozen immunologically distinct diseases, and ditto.
COVID-19, unless something weird happens going forward, is basically one disease. Several strains but they're immunologically related and one is clearly dominant. So your intuition is off. Most people contract a minority of circulating colds and a minority of circulating influenzas, which suggests that the probability of contracting any one specific disease is <<50%.
For the average vaccinated person, the odds are pretty good that they will someday catch *a coronavirus*, but also pretty good that they will never contract Covid-19. Most likely, the coronavirus they catch will be one of the ones already included in "the common cold".
I'm sure some people are legitimately afraid of the vaccine, but plenty of others are in fact doing it out of spite and because "they don't like being told what to do." I personally know some people in that camp.
I think even if it could be validated that those 5,000 people actually died from the vaccine and not something else, that is out of 200 million people, and certainly much lower than the rates of serious illness, long COVID, or deaths from COVID itself. And it is likely caused almost entirely by anaphylaxis, which could be avoided by waiting an hour after getting the vaccine at a location where there's an EpiPen available. I get allergy shots once a month and have to bring an EpiPen and wait half an hour before I can leave the Dr's office each time...not a big deal, and reasonable any time a severe allergic reaction could occur.
Look, I had pretty severe side effects from my first dose, which I was NOT expecting...3 days feeling like a bad flu, full body aches, severe headache. I was scared to get the second one. But the second one was actually easier than the first (perhaps just because I was mentally prepared) and now that I'm through it I am VERY happy to be vaxxed. Furthermore, it isn't about fear of dying, as I think the chances that I would die of COVID are about the chance I'll die from a lightning strike...I am healthy, skinny, and under 50. But it's still worthwhile to know I'm reducing my chance of spreading it and putting elderly people at risk, and reducing my chance of feeling like crap for two weeks straight or getting long COVID. If everyone had just gotten the shots this spring when they were available, we would be very unlikely to be dealing with this current surge, new mask mandates, and people in the ICU, which would be a good thing.
"If everyone had just gotten the shots this spring when they were available, we would be very unlikely to be dealing with this current surge, new mask mandates, and people in the ICU, which would be a good thing."
I recommend you look at the data out of Israel to see why this is likely false. They are 80%+ vaccinated and have the most hospitalizations ever recorded during the pandemic.
Similarly, in Hawaii we never ended restrictions - no "freedom day" for us for masks etc. - and with nearly 83% of adults vaccinated we're having the most cases ever.
In regards to the mandate: I have no issue with private businesses (your airplane comparison to smokers) doing their own thing. I'm still against a mandate from the government, as in my personal experience it degrades trust with communities, especially taking into account the colonial history of the islands.
I work in public health vaccination drives and have heard every fear, question, and concern possible - it's a very complex issue.
The best way I have found to convince an unvaxxed individual is the following:
'The known current risks associated with COVID are greater than the risks associated with the vaccine. The disease is endemic and you will meet it in the future. Your choice is to meet it vaccinated or not. Being vaccinated will make it easier and milder.'
Shaming does not work. Love, empathy, understanding, and harm-reduction models are more productive.
Given that there is a 5% rate of side effects that "prevented daily activities/work/required a medical visit" for each dose, some people are suggesting that the vaccine doses that were chosen were much too high; I even saw some report on Twitter showing good vaccine efficacy given 3 tiny doses (like 1% of a normal dose). Given that small doses would probably reduce these side effects and allow the entire world to be vaccinated, I hope small doses get more attention, quickly.
1. Not sure what you're talking about here. If you only want to allow people with vaccines to travel, a vaccine passport seems tautologically like it couldn't be counterproductive. As for dissuading trust, I mean, maybe, but Trust but Verify has been an iron law for a while now. Wouldn't you feel more comfortable knowing that people coming to your home were way less likely to be infected under this new regime?
2. Sure... but discriminatory against a group we want to reduce. That's like saying welfare programs are discriminatory against the poor, since hopefully they would make fewer of them. Something being discriminatory against a population isn't inherently bad; murder laws are discriminatory against the population of would-be murderers. (Not saying people who don't get vaxxed are like murderers, just making an analogy.)
3. Onerous... maybe? Setting up a national DB or giving out some kind of ID doesn't seem that onerous, but at scale it probably could be.
4. I guess, but I don't think anyone cares about stressing the capacity of natural immunity. As a society, we don't gain value by giving a shit about natural immunity.
5. In what sense? If you've been vaccinated or not? I don't see a major privacy issue here.
6. Vaccination status is currently a binary; I'm not sure what your point here means.
7. Ok, but the opposite of that is unvaccinated people moving more freely throughout all of society which is also bad. The existence of people without immunity anywhere is the problem, not where they happen to be located.
8. Maybe... but this can be used to argue for or against any point. If the existence of controversy is enough to defeat any idea then nothing happens.
9. Citation? I haven't heard this. It also seems self-negating (assuming families are closed circles, how does the infection first happen?)
10. fair dinkum, mate. I think the vaccine reduces the duration of potential infection, though, correct? So it presumably decreases risk, but I'll give this point to you.
11. ok. My communitarian biases don't, but we can't fight our initial impulses.
Overall I think that the cost here is pretty little, the potential gain is sufficient (both in terms of reducing the potential number of infected people traveling and increasing trust from people in the areas being travelled to that their visitors aren't infected) that it's worth doing.
On July 21st, after a surge in cases (~10k% increase) caused by unvaccinated tourists, Malta started requiring proof of vaccination for all travellers (previously a negative PCR test was sufficient). At that point, the 7-day average case count was 203. One month later, it was down to 62. So vaccine passports do seem to work better than PCR tests, and the alternative of shutting down international travel altogether again would seem less liberal.
I don't understand what mechanism you are proposing where the purported decrease in tourism would lead to R < 1. I believe that the mechanism I am proposing, where keeping a low proportion of unvaccinated in the population leads to R < 1, is generally accepted.
Tourists by definition travel to many places and have contact with many people in the places they visit, so they are a vector for spread within a region. R = (# of contacts per unit time) * (probability of transmission per contact) * (duration of infection).
Vaccines affect the second and third factors, but tourism influences the first factor. So if tourism tanked you'd still see a drop in cases, even without any additional restrictions on who can visit.
For example, if Malta typically has 10,000 people visiting per day, but 9,000 of those cancelled their plans because of the surge you described, don't you think that would have a noticeable influence on R? I don't know what the actual numbers are, but it's a hypothesis that also fits the information you've provided.
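To make that concrete, here is a minimal sketch with entirely invented numbers (none of them come from Malta's actual data); it only illustrates that cutting the contact term can push R below 1 by itself:

# All numbers below are invented purely for illustration.
# R = contacts_per_day * probability_of_transmission_per_contact * days_infectious.
p_transmission = 0.05   # probability of transmission per contact (hypothetical)
infectious_days = 5     # duration of infectiousness in days (hypothetical)

contacts_normal = 6     # average daily contacts with normal tourist volume (hypothetical)
contacts_reduced = 3    # average daily contacts if most tourists cancel (hypothetical)

r_normal = contacts_normal * p_transmission * infectious_days    # 1.5
r_reduced = contacts_reduced * p_transmission * infectious_days  # 0.75

print(r_normal, r_reduced)  # halving the contact term alone takes R from 1.5 to 0.75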
I haven't heard of any cancellations based on fear of COVID, only of cancellations because of the restrictions (i.e. by those without vaccine passports). This lowers the probability that such cancellations are widespread. I suppose that in a couple of weeks, when the airport releases its August numbers, we will be able to rule out definitively the hypothesis that tourism declined by 90% between July and August. We already know that unvaccinated tourism declined by 100%.
Do you mean an actual passport, meaning a document that lets you travel interstate, or internationally? I'm a little bemused that that idea is even controversial, at least for international travel. When I was a kid I had occasion to travel (with my family) internationally, and it was routine that you needed documentation of certain vaccines to go to certain countries. I vaguely recall having a little yellow book, some kind of "International Vaccine Certificate" that my mother carried around with my passport. Different countries required different vaccines, and I remember one country in particular cost me about 3 separate visits to the doc's office to get stuck with needles. Boo.
Anyway, I think the concept is only really new and surprising to young First World people who have grown up for the past 30 years or so in a world in which infectious disease seemed to have been thoroughly vanquished. (Surprise! Turns out it wasn't...Mother Nature is a sneaky bitch.)
What you may want to ask yourself is: would your philosophical reasons stand, if COVID had a mortality rate like the Black Death (say 60-80%)? That is, if one person sneaking in from some other ravaged country could cause 80% of Honolulu to snuff it within 4 weeks, say, would you still be firmly opposed to burdening that poor soul with the necessity to get a vaccine if he wanted to visit, and prove it? If your answer is "hell no!" then your opposition isn't really philosophical at all, it's practical -- you're just saying *in this particular case, for this particular disease* the civil liberties infringement is not worth the benefit to public health.
Unfortunately, if the core argument is practical, then reasonable men can disagree about the exact cost/benefit ratio required to support or oppose the idea, and it comes down to a lot of gritty detail, much of which is actually unknown because we lack the needed data.
3) Not at all. Everyone has smartphones, it's not hard. Plenty of places have implemented QR code passes.
4) "Among Kentucky residents infected with SARS-CoV-2 in 2020, vaccination status of those reinfected during May–June 2021 was compared with that of residents who were not reinfected. In this case-control study, being unvaccinated was associated with 2.34 times the odds of reinfection compared with being fully vaccinated." https://www.cdc.gov/mmwr/volumes/70/wr/mm7032e1.htm?s_cid=mm7032e1_w
5) There's not really a way to quantify the value of privacy. But consider: how protective were you of your vaccination status (MMR, etc) in 2019?
6) This is a slippery slope fallacy.
7) The unvaxed would also be incentivized to get vaccinated.
8) Why would a private entity, say a bar or restaurant, implementing a vaccine mandate have any bearing on the public's trust in a public health institution?
9) So then you'd be in favor of them in high-risk areas, like indoor dining?
10) I'm not sure what you think sterilizing means.
11) It's good that you're upfront that this is your bias, but maybe try thinking about it this way: wouldn't it be governmental overreach to ban a private business owner from being able to run their business how they see fit, including their choice on requiring proof of vaccination? Why don't you value the freedom of business owners?
Re: Hawaii, only about half of the population of Hawaii has been vaccinated. All else equal, would the world be better if no other Hawaiian got vaccinated, or if every remaining Hawaiian got vaccinated? Again, all else equal. Forget the yeah-buts, forget that but-what-abouts, just all else equal. Is it better with no more vaccinations, or everyone getting more vaccinations?
1. Note on Counter Productive Nature of Vaccine Passport:
By creating division between vaccinated and unvaccinated populations, vaccine mandates damage public trust in institutions. Personally, as someone who works at vaccination clinics, I have met several individuals who are unvaccinated and are doubling down against vaccination because of loss of trust in government and feelings of coercion. Future health issues could be affected by this loss of trust. In Hawaii, the history of colonialism etc. further complicates communicating and establishing trust with the most unvaccinated groups. Further, the loss of "privileges", the fading carrot of a mask-free society, and the fear of boosters without end make it hard to connect.
As counter-research to your note: research from across Europe shows that compelling people to take vaccines does not necessarily result in higher uptake of vaccines. Further, statistics show that the UK has some of the most positive attitudes towards vaccines across Europe, and that the top 5 European nations for positive attitudes towards vaccination all have voluntary vaccination policies. The European nations with the most negative attitudes towards vaccination include those with mandatory vaccination policies: Hungary, Slovakia and Croatia.
Here are two links exploring the counterproductive nature of passports:
Vax Passports Are a Bad Idea
— Prasad explores the balance of possible (but uncertain) health benefits and social harms
Covid-19 vaccine passports will harm sustainable development
British Medical Journal
"Vaccine passports interfere with that future as they create a structural barrier to sustainable development, benefiting only the few at the expense of so many."
2) Yes, tautological, but currently in Hawaii the most under-vaccinated populations are the Filipino, Hawaiian, and Black communities, compared to the Asian/White populations. Similarly, in New York City close to 60% of Black Americans have not been vaccinated (https://www1.nyc.gov/site/doh/covid/covid-19-data-vaccines.page). One cannot support a policy that discriminates and segregates along differences in vaccination by gender, race, or socioeconomics. The vaccine passport may serve to perpetuate inequality, segregation and structural racism.
3) Not everyone has smartphones. Age issues make it further complex. I work in public health and technology is a major issue with under-vaccinated populations. Luckily, among the over-65s we've hit probably close to 87% vaccinated, so this seems moot.
4) There are several studies comparing natural vs. vaccine immunity, and it's complex. I agree, though, that it looks like someone with natural immunity still benefits from vaccination.
5) My vaccination status in 2019 was on an old piece of paper. My daughter's is digital. The COVID vaccine passport is presented every time you enter a restaurant etc. and is linked to a digital system.
6) The entire COVID pandemic has been a slippery slope. I could accept a vaccine passport if it had a clear end date / end target (say 80% vaccination) and a review.
7) Herding leads to echo chambers. Having the unvaxxed see the vaxxed living healthy post-vaccination is the best way to talk them out of their fears, at least in my opinion. I'd like to see data from NYC on how their vaccine passport program is affecting vaccination rates.
8) Talking to unvaxxed individuals, there are flash points of aggression. E.g.: customers getting angry; new labor costs, with staff and owners not wanting the role of health bouncer/enforcer; and potential for aggression and flash points at premises doors. Few want to operate in a checkpoint society where you need to show ID/vax cards daily. Similarly, the very same people who just months ago claimed voter ID is racist because minorities don't have IDs are suddenly acknowledging everyone needs ID, creating further distrust.
9) I think employer mandates and targeted vaccine drives are more productive. Most contagion occurs within the household unit, and fewer cases come from restaurants/public spaces in contact tracing data from Hawaii.
10) Sterilizing, in vaccination, means that it cuts transmission. Vaccinated individuals are still spreading COVID, making vaccine passports essentially worthless. If both vaccinated and unvaccinated individuals can spread the disease, then there is limited logic in such an intervention...
11) Yes. I have no problem with individual business decisions to mandate vaccine requirements. I find the top-down approach ineffective.
>>
Re Hawaii: Hawaii has 69% of the general population vaccinated with one shot, 62% fully. This is for all ages. Exclude the kids and we already have the majority of at-risk individuals vaccinated, so targeting a vaccine mandate program that would mostly benefit the wealthiest individuals on the island - who are already vaccinated - is probably a distraction from other interventions that could be more effective.
It's funny that the strongest argument FOR a vaccine passport appeals to the left-authoritarian in me: a successful vaccine passport system could give the government an opportunity to nudge other health outcomes. For example, individuals with a high body mass index could be limited in the types of restaurants they can visit or what they can purchase. Alcoholics could be restricted from entering bars. Similarly, individuals with other communicable diseases could be tracked to ensure they do not cause spread within public settings, as the passport's access points expand and shift.
Ultimately the disease is moving to the endemic stage and the vaccines are serving more as a prophylaxis against serious outcomes.
So, as Tyler Cowen says: just get vaccinated and live your life.
> 8) Talking to unvaxxed individuals, there are flash points of aggression. E.g.: customers getting angry; new labor costs, with staff and owners not wanting the role of health bouncer/enforcer; and potential for aggression and flash points at premises doors. Few want to operate in a checkpoint society where you need to show ID/vax cards daily. Similarly, the very same people who just months ago claimed voter ID is racist because minorities don't have IDs are suddenly acknowledging everyone needs ID, creating further distrust.
I think this runs together several discussions that need to be separated.
Right now you need to show a drivers license if stopped by a police officer while driving. You need to show a proof of age (in practice, usually drivers license, but occasionally passport) if you want to buy alcohol or tobacco or cannabis. You need to show proof of employment eligibility (usually Social Security card, sometimes passport) if you want to start a new job. In many states you are required to show ID (usually drivers license, but some other forms are allowed) in order to vote. You need to show proof of vaccination (usually MMR and/or meningitis) if you want to enroll in elementary school or university. You need to show proof of identity to board an airplane (usually drivers license domestically, and passport internationally, with special visas and vaccination certificate sometimes required depending on origin and destination countries).
All of this suggests that we already live in a "checkpoint society" where you need to show ID/vax cards daily - just that most of the purposes are served by the drivers license and only a few require vaccination or passport.
Some people have pointed to this fact and said that extending the ID requirement to voting is a small enough imposition, because voting is a rare enough activity, and enough people have ID already because of the other things. The response that is usually given is that voting is an important enough activity, and the forms of ID that are required are usually enough of a hassle and cost to get, that it's not worth the benefit.
Putting vaccination requirements on stores, restaurants, and concerts would be making the requirement stricter. But the vaccination card is much easier to get than a drivers license (for instance, in most states, you can get a vaccinator to come to your address for free to give the vaccine, if you and four other people are willing to get vaccinated - and if not, you can still go to any CVS, Walgreens, Walmart, or many other common chains and get it for free, rather than having one location per county as with other IDs).
I think most current opponents of voter ID would drop their opposition if it were as easy for everyone to get a free voter ID as it is for everyone to get vaccinated. (Though I haven't actually checked whether getting vaccinated requires the same sort of ID that voter ID laws do, in which case this really is a problematic imposition.)
> In many states you are required to show ID (usually drivers license, but some other forms are allowed) in order to vote
That reminds me of how much ire I have for the people who decided showing ID for exactly one of [vaccine, voting] was horrible, but showing it for the other is completely normal.
I used my "vaccine passport" when I went into San Francisco the other night to go out to a bar and then to dine. It's a QR code that I carry around on my iPhone, loaded from the state vaccination website. I showed my driver's license and my vaccination passport, and I was able to sit at the bar alongside other vaccinated people. Although that passport won't necessarily prevent me from getting a breakthrough Delta infection, at least I know the people around me are also taking precautions. It was very reassuring.
One worry I have, as a Texas resident, is that I will be locked out of these systems. I tried to get the California app and the New York app when I heard about them, but both of those apps say they can only verify vaccinations conducted at locations inside their state. I really don't want to have to bring an important paper document with me to bars when I'm traveling, but I worry that my state's anti-passport stance will make it hard for me to verify my status any other way.
They'll accept a photo of your paper vaccine card, along with a valid driver's license. I'm sure people from out of state will start gaming the system, but it wouldn't be the California peeps - who would be the majority of the customer base in most places, except for some big tourist destinations like Disneyland. Las Vegas would be a totally different story, though...
My understanding is that there are two main ideas for vaccine passport implementations.
The first is for travel across borders (so, for example, people coming to Hawaii). With breakthrough cases, it seems like it wouldn't be particularly effective, so no comment.
Then there are things like what NYC is doing, and smarter ways of doing it like in some European countries, where proof of a positive COVID test from over 2 weeks ago (i.e. recovery) is also accepted. Let's call these "bludgeoning passports." My understanding is that their purpose is really just to bludgeon the populace into getting vaccinated by making life much more difficult for the unvaccinated. I think it's pretty clear why this would be effective, so if you value having the vast majority of the populace vaccinated, they're a good tool.
Now, I think the main people put at risk of COVID by not having enough vaccinated people are the unvaccinated themselves, who would largely oppose this policy. But for politicians who need to keep case numbers down and vaccination rates up in order to get reelected, it's clearly a valuable policy.
"Bludgeoning" is a strong word. State mandated arm twisting seems more accurate. And moreover it will probably work except for the never-vaxxers. In France, Emmanuel Macron announced that people would need to proof of vaccination to get into cafes and restaurants, and within a week 1 million citizens made appointments to be vaccinated.
Right now, Hawaii wants proof of vaccination before you visit. They might accept a COVID test and period of quarantine, but since I have my vax passport I didn't bother to investigate what the non-passported do.
I'm surprised governments haven't offered financial carrots, though. What if the Feds offered a $1,000 deduction off our taxes with proof of vaccination? Of course, states like NY and CA wouldn't have a problem providing proof to residents for tax purposes. Citizens of states like TX and FL would be left out in the cold. Which would put pressure on those states to implement vaccine tracking apps.
Vaccine passports only make sense in clinical settings:
1. Doctors/nurses should be vaccinated so that they don't spread COVID to their patients. Same applies to all other vaccines. Sorry but you don't get a choice if you're treating vulnerable people face-to-face.
2. During COVID surges that fill up hospitals it makes sense to use vaxx passports to deprioritize those who rejected the vaccine. It should be your right to reject it but then if hospitals are full you should be the last person to receive any treatment. Choices have consequences.
Other than that I agree that they're a waste of time.
Wouldn't vaccination passports have the same sort of value as drivers licenses? The point is that it is a document that certifies that you are lower risk for doing this activity in public spaces, and you are banned from doing these activities in public spaces unless you have the document certifying your lower, but non-zero, risk.
Do we apply "choices have consequences" to any other medical condition? It doesn't get applied to obesity, smoking, drinking, or DUI; even criminals get medical care. Yet someone chooses not to get a vaccine and now it's "no treatment for you!".
That is not true in certain circumstances. For example, you won't be eligible for an organ transplant if you have disastrous personal lifestyle choices. Indeed, I would expect a long criminal history might even disqualify you on the grounds that you're terrible at self-discipline.
Good point. That is similar circumstances/conditions, though usually the timescale is a lot bigger (months or years) and the supply is much smaller. I am not 100% opposed to triage based on vaccination status IF the science shows it is a large factor for similar condition patients. But to me it sure seems like there are many other factors that we know right now contribute massively to having issues. So yes if two patients are equally obese and the same age and both have high blood pressure and one is vaccinated and the other isn't, well choose the one who was vaccinated. But I don't think you get to that level of granularity for triage normally.
Is that true? I believe that organ transplants prioritize people who are most likely to benefit from the transplant, but you aren't removed from the list because you have bad personal choices - you are just de-prioritized if you (whether because of personal choices or genetics or age or anything else) are unlikely to derive as much benefit from the new organ as someone else.
Kind of splitting hairs there. Yes, if you make "disastrous personal lifestyle choices" (my phrase) and they end up having no significant effect on your health -- you eat too much, but are somehow not obese, you drink but somehow avoid any harm to your liver, you shoot up but magically don't have hepatitis, you live on the streets in a cardboard box, but remarkably enough make all your appointments on time and have 5 or 6 friends who say they'll help take care of you -- then, yes, you wouldn't be disqualified.
Huh? It doesn't matter whether the "lifestyle choices" have a significant effect on your health - it matters whether your condition is going to be able to be significantly improved by an organ transplant. An obese person is often going to have much better prognosis than someone who made the lifestyle choice of being old. And living on the streets in a cardboard box (let's ignore the question of whether someone is *choosing* that state) doesn't obviously seem like it's going to harm your prognosis more than many other states.
Obesity, smoking, drinking, DUIs, etc cannot be instantly solved with a shot in the arm. If you could instantly switch to a healthy diet with just one shot, then sure, lets discriminate against the obese. All the things you've mentioned are complex problems that people spend years working on, not something you can fix at your closest pharmacy.
A "shot in the arm" does not instantly solve anything. It takes several weeks to start taking effect and up to a month or more for full immunity. People can lose quite a bit of weight in a similar timeframe. Also we have known obesity is a factor for a long time, so the obese have no excuse either. And of course a DUI is literally a choice to drive after drinking. To me it just sounds like you are making excuses.
My understanding is that obesity is quite stigmatized in the medical industry and obese people have trouble accessing treatment and being taken seriously, and doctors tend to attribute every problem to the patient's weight whether this is merited or not. Then again my understanding is also that obesity is not so much about choices either.
Also: we ration medical care in various ways, including by ability to pay. During triage doctors ration medical care based on who is most likely to receive the most benefit from the doctor's time. Rationing by vaccine status, if rationing does become necessary, seems morally better than flipping a coin.
Very much the case. I've been very fat. I've been not terribly fat. The rest of society has some very subtle snubbing and bias against fat people. My experience with doctors - including some bariatric doctors - is that they have some very overt bias against fat people.
Sounds like avoiding the vaccine isn't a choice either. It is the culture and climate they live in.
I am also not aware of any science that says that for those in a similar condition, the vaccine means they will have better outcomes. I 100% agree the vaccine results in better outcomes, but when someone needs the ICU, vaccine status may or may not matter at that point.
If obesity isn't a choice, can we imagine anything as a choice? Putting food in one's mouth is quite intentional. And if it's "base urges" driving it and not individual choice, what isn't?
Calling an outcome a "choice" means that the outcome was a foreseeable consequence of an intentional action. If I intentionally eat two donuts every day knowing it will make me obese, I'm choosing obesity.
But what if I eat only meals recommended to me by some dietary expert - even if I don't much like them, and even if I'm unpleasantly hungry in between meals - and I still end up obese? Did I choose that?
Because that's what all the studies finding that various professionally-designed and administered diets fail to impact obesity are saying: the patient did all the "right" things and made all the "right" choices and still got a bad outcome.
I don't think we have any grounds to say the person should have known better if even our best dietary experts have no consensus on how to safely, reliably, and sustainably treat obesity.
For one, there is a trivially and universally stated and obvious relationship between calories eaten and obesity - eat less, lose weight. That satisfies the “knowledge beforehand” criteria you suggested, and it isn’t followed. If you observe obesity or overweight, which most do, you can eat less. Calories are not exactly an unknown, and calorie counting is a prominent habit.
The experts prescribe “eat less” alongside diets. And that isn’t followed.
The patient didn't make the right choice - they could've just not eaten as much food. And yes, they should also cut processed food, eat more fruit and veg, and whatever, but eating less is the baseline.
Experts do have a consensus: put less food in mouth. But nobody actually does it, so they struggle for other ideas.
In the U.K. we recently had one of our first incel terror attacks (if that's what you want to call them); the killer was 22.
A few months ago (in February) we also had a high-profile case of a police officer kidnapping, raping and murdering a young woman, which sparked heated discussion about violence against women. The killer was 48.
What I find so interesting is that these two cases fit almost perfectly as examples of the bimodal distribution of mass killers, where younger mass killers have average age of 23 and older killers have average age of 41, but there aren’t that many 33 year old killers.
Perhaps tangential to what you're aiming to discuss, but to describe the recent UK murders as an "incel terror attack" feels like a media beat-up; to my knowledge, there's no kind of manifesto, and the pattern of the attacks wasn't in any way tied to anything about sex - it was his mother whom he claimed was abusive, followed by the first people he ran into on the street outside. Awful, certainly, but hardly a politically motivated terror attack.
'terrorism' has been a meaningless term for decades now, but I do think the incel movement (or parts of it) is uncommonly well-suited for pushing unstable people into these types of attacks. It centers on two primary narratives: your life is meaningless and hopeless and can never possibly be good and you might as well die, and the reason your life is so bad is because the world itself is awful and the people in it are evil in various ways and everyone is deserving of your contempt and hatred. That seems like a perfectly-calibrated narrative to push people into 'I should commit suicide, preferably while killing as many people as possible.'
Of course, we're never going to know whether the movement had any significant causal impact in this case, we as distant strangers on the internet can't know that level of detail about something that happened in the past and is this ambiguous. But it certainly wouldn't surprise me, and I do expect more and more situations like this to arise from people subjected to the movement.
It's also interesting that these folks never seem to get much public media sympathy. Islamic terrorists who travel internationally to commit murder get at least some sophistry about how they are oppressed in their native lands or some such thing and deserve our pity. Not so much for these folks.
Well, both the progressives and the traditionalists despise this demographic, so it's not that surprising. It'd take a saint or a heterodox contrarian to extend sympathy to them, who aren't exactly the mass media clout wielding types.
This is probably very much a solved/simple question, but for prediction markets, how should it be determined when a given prediction should be declared resolved, if there is no simple algorithmic way to determine what happened without human testimony? (Think pretty much all political events, as opposed to crypto and stock movements.) Right now that sort of thing (like deciding who won an election) seems to be decided by site admins, who are both fallible and potentially quite corruptible by a determined individual or group.
AFAIK the usual way these sorts of things work is that you spell out criteria clearly in the contract, and then if people disagree whether the contract was followed, they litigate.
Perhaps they could be chosen from a reddit-style forum, where anyone can propose questions and then people (with enough money in their accounts, to prevent spam) vote on which ones to put up to market.
If we want to entirely get rid of site admins, you could probably automate this entire system on the blockchain. Find some mechanism to add a smart contract to the blockchain when there are enough wallets with enough value in them backing the contract, and then people can bet on the event in the contract, which automatically determines payout.
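Since this is only a comment-thread idea, here's a very rough sketch in plain Python rather than an actual smart-contract language; the class, reporter names, and quorum threshold are all made up for illustration, and a real on-chain version would need escrowed funds, staking/slashing, and dispute rounds:

# Toy prediction market where outcome resolution is decided by stake-weighted
# votes from designated reporters, instead of a single site admin.
# Everything here (names, quorum, equal stakes) is hypothetical.
from collections import defaultdict

class ToyMarket:
    def __init__(self, question, reporters, quorum=0.66):
        self.question = question
        self.reporters = set(reporters)             # accounts allowed to report the outcome
        self.quorum = quorum                        # fraction of reporter stake needed to resolve
        self.bets = defaultdict(float)              # (bettor, outcome) -> amount staked
        self.reports = {}                           # reporter -> reported outcome
        self.stake = {r: 1.0 for r in reporters}    # equal reporter stake, for simplicity

    def bet(self, bettor, outcome, amount):
        self.bets[(bettor, outcome)] += amount

    def report(self, reporter, outcome):
        if reporter in self.reporters:
            self.reports[reporter] = outcome

    def resolve(self):
        # Tally reporter stake behind each outcome; resolve if one side clears the quorum.
        totals = defaultdict(float)
        for reporter, outcome in self.reports.items():
            totals[outcome] += self.stake[reporter]
        total_stake = sum(self.stake.values())
        for outcome, weight in totals.items():
            if weight / total_stake >= self.quorum:
                return outcome
        return None  # unresolved: not enough agreement yet

    def payouts(self, outcome):
        # Winners split the whole pot in proportion to their winning stake.
        pot = sum(self.bets.values())
        winners = {b: amt for (b, o), amt in self.bets.items() if o == outcome}
        winning_total = sum(winners.values()) or 1.0
        return {b: pot * amt / winning_total for b, amt in winners.items()}

market = ToyMarket("Will candidate X win the election?", reporters=["r1", "r2", "r3"])
market.bet("alice", "yes", 10.0)
market.bet("bob", "no", 10.0)
market.report("r1", "yes")
market.report("r2", "yes")
result = market.resolve()
print(result, market.payouts(result))  # 'yes' (2/3 of reporter stake clears 0.66), alice takes the pot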
Interesting…I feel like that would probably put relatively strong bounds on the sort of events you can bet on with full trust, as if the complexity of the contract gets too high, it becomes practically impossible to check through it yourself to determine it truly says what the poster claims it says
Can someone recommend a good active "analyzing fiction books I'm reading" blog / newsletter / substack / whatever? I run one at http://dreicafe.com but I've been skewing towards satire more than literary analysis, and I'd like to siphon the energies from someone good at the latter because I love such stuff. When I studied creative writing in ages past, they made us keep a 'reading journal' which is a phenom way to process books; reading other people's was super valuable. A lot to be said for seriously mulling on a novel after consuming it. There must be a few blogs like this around?
Gurdjieff is probably familiar to at least a few here. Irrespective of his teaching, most people who met him thought he was the most extraordinary human being they'd come across. And to me, that extraordinariness is interesting.
Check out _My Journey with a Mystic_ by Fritz Peters. For some reason Fritz Peters' family packed him off from NYC (by ocean liner in those days) to Gurdjieff's school in France when he was eleven. Peters says he wasn't consulted in his parents' decision. They just sent him without explanation. It's not clear if Gurdjieff knew he was coming, but Gurdjieff agreed to take care of him. He gave Peters the single chore of mowing the lawn to "pay" for his room and board, with the stipulation that he promise to mow it once a week no matter what (and there's a good tale there). Gurdjieff became Peters' mentor and caregiver. He may have been a charlatan, but he was a good-hearted charlatan.
Brilliant anecdote: "Workers with the most patents often shared lunch or breakfast with a Bell Labs electrical engineer named Harry Nyquist. It wasn't the case that Nyquist gave them specific ideas. Rather, as one scientist recalled, 'he drew people out, got them thinking.'" (Pg. 135)
Countess Bathory I already knew, the Baron I did not. I was interested to learn that he came from Graz in Styria (Austria), as Sheridan Le Fanu's "Carmilla" is set there, and many Gothic stories were also set in that vicinity for the usual vampire, werewolf and cursed haunted castles storylines.
Does anyone here happen to be in PR, journalism, or anything relating to media relations? I would like to become more effective at reaching out to media/pitching stories more effectively, for both EA and personal reasons. If anyone prefers to talk privately about this rather than in the thread, I can be reached at yitzilitt (at) gmail (dot) com.
Just out of curiosity, how many people on ACT think with words? I only think with words when I have to write something, or when I rehearse a speech, or when I replay a conversation in my head. Otherwise, I seem to make decisions and figure things out without words. In fact, when I speak to people, unless I'm carefully rehearsing what I'm going to say, I don't think the words before I vocalize them.
Bonus question. Subjectively, do you ever try to pin down where in your brain your internal speech is coming from? For me, when I silently talk to myself, the talking seems to be happening about where my premotor cortex is located - above and behind Broca's area, which is the part of the brain associated with language. However, when I vocalize words, they subjectively seem to be coming from around Wernicke's area, midway between my ears (which happens to be the area associated with speech processing).
NB: After about ten years of doing Mahayana-style mindfulness meditation (including the basics of Dzogchen Rigpa meditation, which I never really mastered), I not only got used to observing my thoughts and feelings as they arose and passed away, but I became aware of where in my head they were subjectively arising from.
When I'm consciously thinking about a specific topic, I usually have a stream of words as if I were explaining my thoughts to another person. Often I actually vocalize these words. Although sometimes I have a sort of a placeholder where I'm referring to some complex thing that I haven't yet given a compact name to, and this creates a sort of skip in the wordflow where that name would be if I had one.
I feel like this habit has contributed to me being good at explaining technical topics to other people.
Same, I generally have internal dialogue rather than internal monologue - and also same in that I'm pretty good at explaining technical things to people (with a particular specialism in cross discipline explanations, i.e. explaining number things to words people and vice versa)
I'm very surprised that you have a spatial sense of where in your head a thought is coming from! To me, if there is any location, it seems to be about the midpoint between the eyes and ears, since those are the sensory organs I most identify with. But I certainly can't differentiate from this being an illusion caused by over-reliance on those sensory organs vs this being a veridical representation of the neurons that are causing the sensation. The fact that I have on so many occasions learned that tactile sensations are being driven by a different location than the apparent location on my body makes me very suspicious of attempts to introspect a location. But I do think that I can get better at these things, and I have gotten better at locating bodily sensations as I do yoga - though that is primarily because of feedback from very precise bodily movements, which I don't think I have for internal thoughts.
My thought processes are largely just concurrent streams of words going on. I can hold concepts in my head, but the way I process them is via narrative. (Think of the blind men describing the elephant.)
That being said, I've got basically no visualization capacity and don't dream, so I'm not sure how normal I am on this spectrum.
Suppose that you (the general "you", not the OP specifically) are driving, and see a potential hazard up ahead. Perhaps a cyclist, that you will shortly want to overtake. Does the word "cyclist" pop into your head? And such phrases as "oncoming traffic", "near-side parked cars", "pedestrian who looks like they might cross the road but hasn't looked in my direction yet", and so on? They don't for me, and when I've been asked during advanced driver training to give the instructor a running commentary on that sort of thing, I've found it impossible. The effort of grabbing all the words for the things I'm seeing and doing is too much of a distraction from the task of making progress while not killing anyone.
I'm a relatively inexperienced driver who was encouraged to start narrating when I was just learning, so that's a confounder, but yes I do generally think about the things around my car in words. It typically sounds a bit like, "Car, car, BIG car woah, pedestrian over there, car, car, use your blinker asshole, car door opening?, car, bike behind me, car..." (Narrating *out loud* during driving is still hard for me, though—it takes more brainpower to move my mouth than to think words in my head.)
I continue to be surprised by how different the subjective experiences of different people can be. I've never ever had a narrated dream. But maybe you're just calling experiences "dreams" that I don't. I have fallen into dreams while thinking in words, with my words getting crazier as I fall deeper into sleep, which might be the same experience you're talking about. But I don't remember any stories or fictions which I dream that I'm participating in being narrated.
I usually perceive myself as thinking in words when actively thinking about something - as opposed to daydreaming, which is mostly imagined senses - but plenty of concepts feel like discrete things in my mind but don't explicitly have words and are genuinely difficult to put into words.
One oddity that might be of interest - when I am thinking in words, it's in a way that is akin to reading or writing, not akin to speaking or hearing. This is probably because I read a *lot* and don't have that much social interaction, relatively speaking. I distinctly recall as a child I dreamed in text!, though at some point as a teenager I started watching movies and playing games more and started to have dreams with visuals and sound. As best I can tell, my imagination can cobble together new composites, but only from discrete sense data that I have actually experienced - fiction makes it far easier to have experienced fantastical visuals than the associated smells and tactile sensations.
Great question, because so much of Western philosophy is based on Plato's presumption that all thought and reason is in words.
I nearly always think, then hear a voice in my head translating my thoughts into words. But I definitely do the thinking first--the idea is fully-formed before the sentences are, as proved, for instance, by the frequency with which I stop, unable to remember the word for a thing despite knowing full well what the thing is--its meaning, shape, appearance, and function.
I've often wished I could think without then phrasing it in words. It would save a great deal of time. But I think casting it into words forces me to put it into a more-logical form, which sometimes reveals holes in my pre-linguistic logic. Tho I can't cite any examples.
Mathematically, I can't think in words, because math is (I think) more powerful than words. I may visualize distributions and graphs in my head, or not visualize them at all, yet intuitively understand what characteristics a phenomenon produced by a particular distribution will have, or what shape the composition of two functions will have, or how it will behave at its extremes, or whether a function's surface will be monotonic, non-monotonic but smooth, or discontinuous.
I'd say I think about 85% in words (mostly the sound of my own voice in my head), 10% spatial reasoning (in particular, I solve equations by rearranging symbols in imagined space and often remember lists and procedures spatially), 5% the odd visual concept, other person's voice in my head, musical idea, etc.
I was going to say that my internal voice comes mostly from the middle of my head, but maybe slightly to the left, before I noticed that I'm sitting with my right ear close to a wall and tried turning my head. Turns out my inner voice actually seems to shift a bit to be coming from where most of the sounds in the room would be coming from. I can also move my inner voice to pretty much any location I want, including outside my skull (farther away = less precise location, though).
I seem to think in words, but I don't think I actually do. When I try to say or write down what I'm thinking I have to stop and figure out how to say it.
I don't see how that counts as evidence? I'm quite sure I think in words, but something about talking out loud or writing seems to reduce my fluency a bit, like... it's just easier to think in my head somehow.
(edit: although I think with words, I think there's also underlying thought going on without words, and that the words just... help, somehow... but I can still have an idea with no corresponding word, which can occupy a silent spot in a mental sentence.)
I don’t think thinking with words is a real thing. It’s more of a “is the Father or Son or Holy Spirit greater, and are they one, divided, or one and divided” sort of question, or a discussion over whether fish is a category or a class or a type or a fuzzy boundary. Nobody thinks in words; some people just larp with words while they think.
I have to think in words as I compose this answer to your statement. But I didn't think in words to determine what sort of answer I would give your statement. The words happened after I decided to give you my personal dichotomy example of non-word / word thinking in action.
I think larp might be the wrong verb. Thinking in words makes sense to me as an interface between thought processes and conscious experience. In order to process thoughts, you have to compress and convert them into a format that consciousness can read, whether that's words or pictures or feelings.
Strictly conjecture, of course, but it makes more sense to me if it's a process with a purpose, rather than something self-gratifying.
I expect the actual mechanisms that produce thought are very intricate- a lot of things happening in parallel, a lot of chaos and uncertainty, and a lot of important parts happening at the individual neuron level. That process would be illegible to me, it's got orders of magnitude more parts than the most complex systems I understand.
Whatever I perceive my thought process to be is an abstraction. I think in a self-restructuring network of electric pulses and neurotransmitters. I don't experience anything like that, I experience words. Before a thought enters my conscious experience, it gets translated into words, likely using a lot of processing power. When my executive functions manipulate thoughts, they do so at the word level. So words appear to be a kind of interface between high-level and low-level processes.
For some people, the translation doesn't go to words, it goes to visuals. Those people probably have the same process for producing thought, but the high-level component is in a different format. Often, my thoughts skip translation into anything informational and just output a feeling, especially if that feeling is fear.
So my 'thought processes' are a complex, subconscious system. My 'thought formats' are the encoding used to convey those thoughts to my higher-level functions. My 'conscious experience' exists somewhere above the encoding level and information is routed through it for whatever reason. And my 'feelings' are a particular component of my total conscious experience.
My apologies if I'm not addressing your question correctly. And, again, this is all very speculative, it's just the model that I'm working with. Really, my point of contention is just that it makes more sense if I experience thoughts as words *towards an end* and the system I've described is just a hypothetical that follows from that assumption.
I think in words automatically, but not exclusively. I've noticed that my nonverbal thoughts arise earlier than their verbalizations. The difference in speed is remarkable: I can form a thought in a fraction of a second, but the verbalization can take several seconds to catch up. I can cut it short and leave the thought unverbalized, but it takes effort and focus, like trying to avoid blinking, but even more difficult.
You being aware of where your thoughts are coming from seems highly dubious. The brain, physically, does not have senses. It does not have touch or pain receptors and it absolutely does not have proprioception. There are no neurons that encode the information "this is my own position relative to all the other neurons".
However, the brain does create the *experience* of your consciousness being located in your head. You feel like 'you' are a perceiving entity separate from your body, with the perceiving happening in your head. While it does happen there, the experience is illusory. When people have an out-of-body experience, they tend to perceive themselves as floating above their own body. I believe this is the result of a shift in this illusion of being inside your own head. The origin of your sensations never changes, but the feeling of it can.
So, when you say that you perceive where in your head sensations are arising from, I don't think you're tapping into any neural location data. However, I think it's plausible that you can perceive that different thoughts have different 'signatures' that indicate which brain area they originated from. Just like a face has its own qualia, a 'Wernicke thought' might also. This qualia may or may not be interacting with your thought-location experience, based on your knowledge of what brain areas are likely to be involved.
Is it possible for you to alter your perception of where your thoughts are coming from? Can you attempt to shift it towards something like an out-of-body experience?
As for proprioception, I wouldn't say this skill (or delusion) is related to the body's proprioception systems. It's more of an impression. Like I said, this may be like a phantom limb — except that I don't intuitively believe that that's the explanation because it happened as a side effect of meditation and observing my thoughts.
I will totally admit that my claim that I can locate my brain processes internally in my skull is highly dubious, indeed! But it's something that I picked up as a side effect of meditation — actually it came on slowly over ten years of regular meditation, of observing my thoughts arise and dissipate — and I'm not sure anyone who hasn't spent years meditating could do this. Likewise, I've never discussed it with other meditators, so I don't know if anyone else has noticed this. But I thought I'd put the question out there, to see if anyone else has had this experience. I'm perfectly willing to admit that it may be similar to the phantom limb phenomenon.
And, yes, I've tried to make myself feel that these processes are happening in other locations of my skull, but with no effect. So, if the original "perception" was due to autosuggestion, it's damn hard to autosuggest myself out of that belief! Likewise, I can't shift my perception of "me" and where my thoughts are arising outside of my own cranium — for example, I can't move them down into my abdominal cavity. Qualia are somewhat different, though. Feeling is at the point where I'm touching something. Taste is in the mouth. Hearing is inside my skull (but that's where my eardrums and cochlea are). Vision seems to be right behind my eyes, though. I've tried to make myself visualize an out-of-body experience, but I've never been able to accomplish that way of perceiving my selfhood. The thing that I identify as my identity, the "I" that is the watcher in Plato's cave, seems to be located midpoint on the line between the front and top of my ears, just below where I imagine that my internal-speech process is running.
> There are no neurons that encode the information "this is my own position relative to all the other neurons".
I don’t see the relevance. Computers don’t have these either and still have strace and can read and analyze their own code. And while that’s ... less of an analogy and more of an unrelated system, it shows you don’t need whatever that kind of neuron is to do whatever a “think about thinking” is
Computers can analyze their own code, but they don't have a sense of where in their memory chips that code is located. And that's more than an analogy; it's the very same phenomenon, just using electric rather than chemical signals, and different representations.
They totally do! In the sense that it is possible to write code that can figure out relative physical locations of memory addresses ("Row Hammer" is one example).
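To make that concrete: on Linux there is an even more direct route than Row Hammer-style timing tricks, because the kernel will simply tell a sufficiently privileged process which physical page frame backs a given virtual address, via /proc/self/pagemap. Here is a minimal sketch, assuming a Linux system and Python 3; note that since kernel 4.0 the frame number is reported as zero unless the process has CAP_SYS_ADMIN.

```python
import ctypes
import os

PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")

def physical_frame(vaddr):
    """Return the physical frame number backing a virtual address, or None.

    Linux-only sketch: /proc/self/pagemap holds one 64-bit entry per virtual
    page; bits 0-54 are the page frame number and bit 63 is the 'present'
    flag. Without CAP_SYS_ADMIN (kernel >= 4.0) the frame number reads as 0.
    """
    with open("/proc/self/pagemap", "rb") as pagemap:
        pagemap.seek((vaddr // PAGE_SIZE) * 8)
        entry = int.from_bytes(pagemap.read(8), "little")
    present = (entry >> 63) & 1
    pfn = entry & ((1 << 55) - 1)
    return pfn if present and pfn else None

# Allocate a buffer, touch it so it is actually mapped, then look it up.
buf = ctypes.create_string_buffer(b"x" * PAGE_SIZE)
print(hex(ctypes.addressof(buf)), physical_frame(ctypes.addressof(buf)))
```

The point is only that "where does this data physically live?" is information a computing system can surface when something bothers to expose it.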
You're right, but I didn't say that neurons by definition cannot do this. They just don't. We can think about thinking because it's advantageous to do so; there is literally no reason for our brain to expend any energy on thinking about where neurons are located relative to each other.
For the most part, I don't know how to think without words. I try sometimes, but broadly speaking my internal monologue just keeps chugging away as usual.
(I honestly have no idea what you mean by the second question. How do you have any kind of spatial awareness for the inside of your brain?)
It may be pure delusional thinking on my part, but after sitting in meditation and getting used to watching my thoughts arise (words, images, your surrounding qualia), after a while it seemed to me that certain processes were located in certain places inside my skull. I guess my counter question would be, have you ever tried to develop a spatial awareness of your mind and its functions?
It seems to me that most people just let their consciousness do what it does without trying to examine it as it happens. The Buddhists are interested in observing the process as it happens, but there's nothing in Buddhism that says you can't observe *where* the process is happening...
Just chiming in to make N=2: I also either have a spatial sense of my mental processes within my brain or else a delusion that I do.
It makes a little sense that we would have sensory input relating to blood flow in the brain, like an fMRI, just because we can sense things like hypertension anyway. But I'm less confident, because I can't see an adaptive advantage. I'm also not blinded, as I knew generally where things were supposed to be before I made those observations. A quick experiment:
where do you feel your visual imagination exists? (I may have been linked to the correct answer from here, so here's hoping you haven't seen the same thing)
One thing that's definitely not placebo is the 'third eye' sensation of pressure behind the center of the forehead. I experience it very intensely while meditating. I understand there is a good chance that it's gland behavior, but it at least adds credence to the idea of a sense input that doesn't apparently produce any advantage.
> where do you feel your visual imagination exists? (I may have been linked to the correct answer from here, so here's hoping you haven't seen the same thing)...
I'll answer your question with a story. For a while there I was studying under a Nyingma instructor. She was big on giving us guided meditations on visualizing images. For instance, we were supposed to imagine Avalokiteshvara sitting on a lotus with four arms, one arm holding the lotus flower, one arm holding up a mala, and the two other arms with their palms in a prayer pose. She'd give us his/its/her clothing to visualize, and its crown, and the colors, and so on. These guided meditations would take half an hour at least. Then, after we had constructed our bodhisattva image, we were supposed to try to hold those images "in front of us," she said.
Anyway, I just couldn't do it. I kept trying to imagine Avalokiteshvara in front of me, but I couldn't keep the image "together". I'd lose it quickly. I'd try to reconstruct it to catch up with her narration, but that just made everything worse. I asked her for advice, but she just told me to keep practicing. It was one of the most frustrating meditative experiences I've ever had. I left her group and moved on to a Gelug group that just practiced mindfulness meditation.
Some years later I decided to try these visualization practices on my own. I had seen enough images of Avalokiteshvara to know what he/it/she looked like. So I tried to visualize him/it/her in front of me like my Nyingma instructor had instructed. No luck. But then, for some reason, I started to try to imagine Avalokiteshvara behind me, so I was sitting at its feet. Very quickly I was able to construct the image! Then I realized that I was really imagining it as sitting in the back of my brain. Well, guess where the visual cortex is? Well, you know the answer.
Oh my. I wasn't going to mention the third eye (!) -- just because I thought people may have had a hard enough time dealing with the spatiality of thoughts and qualia in the mind. But I definitely experience the third eye. For me it's a dim internal "light source" right where you described it. It's always there, even when I'm not meditating. But once I close my physical eyes and take notice of it, it's like an area of low phosphene activity (similar to what I get with my eyes closed). If I go off on a meditative tangent and observe the third eye's phosphene patterns, they will become more pronounced. Lots of metallic blues and indigos. And the longer I concentrate on it, the faster the patterns appear and approach me. Almost as if they're coming out of a tunnel at high speed. I have a friend who just started meditating, and, although she says she never noticed it before she started meditating, she's developed an even more pronounced perception of the third eye than I have (colorful shapes, she says). And she's found it hard to meditate with the third eye glaring down on her. As for me, I don't get sucked into it unless I make the effort.
I haven't. Maybe I'll try that. Though I don't currently see how such a thing is possible, biologically/neurologically. Like, my body has sensors and my brain has corresponding regions for tracking where different parts of my body are, so I get spatial awareness of my body that way. But is such a thing true for my thoughts and my brain itself? How/why? (I'm not trying to accuse you of being delusional, just that it doesn't track with my current understanding of how my brain and body work.)
Start by trying to imagine where your sense of self identity (your "I") is in your brain. Is it up front over the eyes or behind the eyes? Is it at the base of your skull where the spine connects? Is it way in back? Is it top center? Don't over-think it. If it doesn't come immediately to you, revisit the question over the next few days and the coming weeks. Try to remember to try this exercise when you're talking to someone or when you're out physically exercising. See what you find.
To elaborate (in what would be an edit if Substack had an edit button): Words are such a fundamental part of my subjective conscious experience that, when people tell me that they think wordlessly all or most of the time, my (incorrect) instinctive gut response is that they are mistaken, because I can't imagine what that would *be like*. I can imagine thinking wordlessly some of the time — I'm pretty sure I've done it (though it's impossible to notice while I'm doing it, because that would be wordful) — but I can't imagine how a human could think like that all or most of the time.
(To be clear, my head isn't bereft of non-word cognition — it's full of wordless feelings and emotions and sometimes pictures too. It's just that there are very rarely *no* words in the mix.)
What Is It Like To Be A Human Who Thinks Without Words?
I agree with you on the difficulty of noticing that you are thinking wordlessly. I find that my cognition elaborates itself into words when I focus it on itself, and this process is usually too fast and unconscious to notice.
The reason I'm sure that this does happen to me is that I can get myself into a state where my internal monologue is in a language I'm less fluent in, which causes the convert-thoughts-into-words-for-metacognition process to be more difficult and slow down enough to be observable.
There will surely be an Afghanistan thread on this politics-allowed open thread, so I'll start one.
My only-slightly-controversial view: we should try to work with the Taliban. If they view themselves as good men and followers of Allah, presumably they won't engage in Khmer Rouge-style atrocities. And if they do commit such atrocities, Iran and Pakistan will start fueling a resistance that isn't tainted with American imperialism.
The Khmer Rouge viewed themselves as good men and followers of reason. Look at what they do, not what they say or believe, and certainly not what they say that they believe. We've got plenty of evidence as to what the Taliban do when they think they can get away with it.
Also, as WayUpstate points out, Pakistan *already* fueled a resistance untainted by American imperialism. Which is the Taliban.
Pakistan is an agglomeration of Balochistan, Sindh, Punjabistan, and "frontier areas" populated by Pashtuns. If the situation in Afghanistan threatens populated cities, the government of Pakistan will not give idle acquiescence.
I think you overlooked the fact that the Taliban are a Pakistani-created force and continue to be supported by the Pakistan intelligence organization. Should we work with the Taliban? Only as far as they demonstrate their ability to follow through on any agreement and receive any subsequent punishment for not doing so. I would definitely start from a position of "we doubt the veracity of any of your statements but will see your actions as proof of your ability to build confidence in your ability to govern or make agreements that you have the ability to execute."
I'm not over-looking the Taliban's Pakistani connections at all.
The situation is "trust but verify". Tit-for-tat. And the rational answer is to start from a position of "we trust you, and we know you will be destroyed if our trust is in error".
I’m not sure that is the rational “starting point”, in that we are not at the start of things; the Taliban has a long history to consider. It’s not like they are a “blank slate” regime with no prior history to evaluate.
I think leaving a path for the moderate margin of the group to have some wins, and therefore gain popularity, is a fairly generalizable strategy for this kind of situation (though the US political system seems to have trouble supporting wins for any part of an “enemy”, cf. the moderates in Iran). So while I would not say that we should trust the Taliban, I think we should try to work with them so that we have a chance to steer their development.
I think the main constraint to consider is to what extent domestic US politics constrains the government's foreign policy choices. It’s not like the government is free to pick any rational policy action they want, free from political concerns. So I think the main things to watch will be whether the Al Qaida-offshoots move back under the Taliban’s wing, and how strongly they regress on human rights. It will be hard to get political support for cooperation from the US if they are actively harboring terrorist groups or executing “infidels” left and right.
Alex, the other evening an Afghan-Australian biophysicist, Dr Nouria Salehi, was discussing what her contacts in Afghanistan are saying. Her (their) take is that finally there might not be corruption on every corner, and a possibility for an Afghani-led reform now the Taliban are in power. The US-centred corruption of the last 20 years did serious damage to the country, so it's hard to see how it could be worse under a nationalist government (I may have to eat those words, but let's see). Apologies to US readers, but there are scant examples of US occupation that benefits an underdeveloped nation without a strong existing national government.
As akarlin said, the loss of subsidies worth 40% of GDP will hurt, and they have to pay civil servants and workers and such to run all the tasks of government, from city to provincial to central. They won’t necessarily succeed!
The "state" of Afghanistan is a post-colonial construct, and it is true that revenue to fund the centralised state may be beyond the Taliban. Although they will have a number of willing donors - money for influence - they may well decide to try to recreate the heavily local nature of "old" Afghanistan. The advantage of that would be that local revenue would support local administration. But the middle class of Kabul wouldn't like that.
I agree that corruption will probably drop, but it seems like there is a decent risk of a lot of state-sponsored murder. But if that doesn't happen, or if it happens in a burst and then largely subsides, I could see Afghanistan's outlook improving from there.
I sort of think that US occupation trapped Afghanistan in a low equilibrium, and now that it's over they have the chance to aim for a higher equilibrium, but they could also just go into a free fall.
It's far from certain that the Taliban will be able to hang on to power. The Afghan budget was 75% dependent on foreign grants, and that has just gone up in smoke. Foreign currency reserves are frozen and the country is de facto cut off from the world economy. This is a situation that would test the most ingenious economists and policymakers; we are talking about the Taliban. Last time the Taliban ran a central bank, its governor declared the currency worthless, stopped the issue of new notes, and spent more time on the battlefield than in his office. I wrote about this here: https://www.unz.com/akarlin/where-are-the-afghanis/
Meanwhile, the Northern Alliance has also reconstituted itself. Hard as it is to believe, but not all might be lost yet. In any case, recognizing and dealing with the Taliban is way premature right now.
We'll see if they keep it frozen forever. I suspect it's part of the Biden Administration's negotiations with the Taliban to keep them from attacking the airport until the US can get its Afghan allies out of Kabul.
"a situation that would test the most ingenious economists and policymakers" - yet what you describe is what most countries before 1900 considered normal.
American policy-makers have suffered for decades from the delusion that money alone can solve all their problems. They have wasted literally over $1 trillion USD in service of that lie. It didn't work.
So long as Afghanistan is self-sufficient for food, and it has a government that discourages foreign trade, it literally has no need for "foreign currency reserves".
5M Kabulis might beg to differ in a few months after the state ceases paying wages and hyperinflation eats their savings. Of course, if the Taliban survives for a few years, the issue will become moot, as widespread de-urbanization occurs. The problems won't be trivial, though. The Afghan population has tripled to quadrupled since the 1980s, there will be a huge refugee crisis.
Good use case for Bitcoin as an alternative store of value - less risk that people in places like this have their savings die or access to their funds denied. Highly subjective opinion of course, but that's my belief.
"Chainalysis' 2021 Global Crypto Adoption Index gives Afghanistan a rank of 20 out of the 154 countries it evaluated in terms of overall crypto adoption."
Is there sufficiently widespread internet/electricity in Afghanistan for this to be viable? And is it robust enough to survive the likely turmoil that awaits the country?
I'm much more skeptical about Bitcoin than I used to be, but you can use cryptocurrencies just fine in a place without a lot of electricity for mining, as long as there are people elsewhere in the world working to keep the chain secure.
> Access to electricity (% of population) in Afghanistan was reported at 97.7 % in 2019
> There were 7.65 million internet users in Afghanistan in January 2020. The number of internet users in Afghanistan increased by 366 thousand (+5.0%) between 2019 and 2020. Internet penetration in Afghanistan stood at 20% in January 2020.
50% mobile phone penetration it seems
And I dunno if it’ll survive the Taliban. Probably lots of reports about that are available. The loss of foreign aid will hurt.
> “As is always the case, the IMF is guided by the views of the international community,” IMF spokesperson Gerry Rice said in a statement Wednesday. “There is currently a lack of clarity within the international community regarding recognition of a government in Afghanistan, as a consequence of which the country cannot access the Special Drawing Rights (SDRs) or other IMF resources.”
Hilariously phrased.
> 75 percent of public spending funded by grants.
The Afghan central bank’s ten billion USD of assets have also been frozen.
> profits from the mining sector earned the Taliban approximately $464 million” in 2020.
So their economy is ... probably screwed. You can see why bitcoin probably wouldn’t help, and if it could, the US government would probably just seize it and arrest anyone who traded for it.
or more accurately an Afghan micropayments app that can support rapid and frequent transfers of sums of $.1 USD and below. Honestly I do expect at least one attempt.
I disagree with your economic theories, but that's not the main issue. Why would the Taliban cause de-urbanization? They aren't powerful enough a government to cause that; at least not without Iran, Pakistan, Tajikistan, etc. supporting rebel movements.
The Taliban do not have the fiscal capacity to support a state at the scale it was at when foreigners were pumping in subsidies equivalent to 40% of Afghan GDP. When state workers in the cities stop getting wages, they'll need to move out into the country (or abroad) to survive. Service industries that catered to them will also have to radically downsize.
There is a precedent for this. Kabul had 1.5M people in the late 1980s, by 1996 it had collapsed to 0.5M. Certainly I don't expect it to be as bad this time, but I don't see how it can not happen in the medium-term.
I kind of do, but that logic justifies long-term colonial occupation of a rather large fraction of the world - it's special pleading to deploy it only about Afghanistan.
What are some good examples of organizations learning from failure, and doing things better the next time around?
Two examples:
- I've recently been playing Skyward Sword. It's, well, disappointing, but I'm impressed by the degree to which Nintendo managed to notice the frustrating things about it (e.g. the lack of open-world feeling) and successfully turn them around by Breath of the Wild (while also successfully keeping/improving a lot of the things that did work in Skyward Sword).
- the US navy in WW2 seems to have consistently done this pretty well - e.g. they introduced damage control measures in their carriers after the battle of the Coral Sea that may have helped them in Midway (only one month later!)
Which of course naturally raises the question of whether, now that we're no longer in the initial post-colonial phase, we should have considered a third or fourth iteration of the governance system, maybe starting in the early 1820s when the party system was falling into crisis and re-forming.
Yes--which raises the question: Now that we're not at war with Britain any more (which IIRC was the main reason for that change--we couldn't build a military, nor pay the soldiers what we already owed them, under the Articles), would we be better off going back to the Articles of Confederation?
No. One of the other issues is that under the Articles of Confederation, there was no way for a central negotiator to bind all the constituent States to treaties (such as tariff waivers).
I mean we couldn’t pay soldiers because we couldn’t collect taxes, and we also had issues with enforcing laws. I suppose if you felt just disbanding the US into 50 different nations was good it would make sense to go back, but otherwise I would say no
The US Army has probably put more effort into learning from failure than any other organization. It's a regular part of everything they do. I don't know whether they've aggregated any of this information anywhere that's publicly available. I can say that, anecdotally, learning from failure has great benefits, but also tends to lead to being prepared for the previous war.
If the army has made learning from failure an ingrained automatic habit, then being prepared for the last war doesn't matter if the next war lasts longer than a few days.
that depends on just how frequently one is at war; maybe the USA intervenes globally often enough to get away with this (though I think it does conspicuously have this failure mode*), but for states that have managed decades of peace, you run into issues that take years to resolve, and you can't afford years against a peer military. Hell, you can't necessarily afford days in a modern conflict.
*things like guns that don't work well in desert conditions, or frankly the big issue of how to fight a non-state actor, which after decades in the ME they never figured out.
While I'm sure they learned a lot, I think the main reason they did better at the end was simply economic inertia: the Confederacy could not replace casualties and re-supply their troops at the same rate the Union could, due to underlying economic factors. Sure, they learned to put aggressive generals like Sherman and Grant in charge, but Grant's march towards Richmond had the Union losing ~55,000 men to the Confederacy's ~33,000. The Union could afford to lose 55,000; the Confederacy could not.
Right, but see Grant *knew* that and used it effectively to win the war, which McClellan didn't. As they say, successful generals study logistics. Economic and manpower advantages are only enabling advantages, they only turn into military success if they are used well. It's perfectly possible to lose a war despite having a huge advantage in manpower, cf. the Treaty of Brest-Litovsk.
If you live in a city where hospital beds and ICUs are full, what exactly happens to you if you need hospital care? Covid or non-covid, both scenarios. I'm trying to understand this for Austin, Texas. Quite scary to think about.
Moving less urgent people from the ICU to regular beds was mentioned. But in general, when the system gets more stressed you wait longer. Sometimes long enough to die before receiving required medical care.
In Poland ambulances were waiting in queues in front of hospitals, sometimes for several hours.
AFAIK no one has tried so far to check or estimate how many people died as a result.
ICU is a designation, not a physical bed design. To some extent beds can be made ICU or not ICU. As the name implies the difference is mostly how intensive the care you get is, which is a staffing issue. The same is true for beds. The number of "beds" available does not refer to actual, physical beds but rather staffed beds capable of providing some level of care.
The fullness of ICU can be quite elastic as a result. If you look at graphs from various hospital systems you can see that often the number of beds and ICU positions goes up by a lot within the span of a couple of weeks. Moreover hospitals try and keep ICU as full as possible because otherwise you're wasting valuable staff who are just hanging around waiting for patients to arrive, instead of spreading themselves out over less intensive beds.
However the media almost never explains this, because stories about ICU being "full" are guaranteed drivers of clicks. You can find stories about overflowing hospitals in flu season for many years pre-dating 2020.
Quite possibly they move someone who is in an ICU bed that they don't need, into an ordinary hospital bed. "Need" is a rather fuzzy term here, and so long as ICU beds exist it is in everyone's(*) interest for them to be mostly-full with people who sort-of-need them. At some point, you reach a level where literally every bed is occupied by someone who will literally die if they are not in a literal ICU bed. But until you get close to that point, it's hard to know how much slack is left in the system because nobody is really motivated to track degree of "need" for ICU patients.
Once you do reach that point, the pile of dead bodies will tell you pretty clearly that you should have done something about it last week.
* Except maybe the people who are paying the bills, but in the US health care system they don't get to make any of the relevant decisions.
I think India was the only country known to have a delta outbreak before the major delta outbreaks in Europe and North America got going. So there wasn't quite enough evidence that delta was qualitatively different, until it was already happening. Of course, the US delta outbreak was clearly different from prior outbreaks by mid July, so there could have been time for some reaction.
I don't think this has been clear. There have been fewer hospitalizations and deaths recently than there have been at this point in previous waves, but most of this has been attributed to the high vaccination rate among older people, even in areas with overall low vaccination rates. There has been no consensus on whether delta itself hits individuals harder or less hard than previous variants.
Are hospital beds completely full anywhere? If ICU beds are full they just have you in a hospital bed. And I imagine if all "beds" are full there is some amount of extra beds they can give you. I don't think the count of "beds" is actually the physical number of beds but rather normal operating capacity. But yes at some point the system gets overloaded and you have to triage. Also it seems like in a lot of places the real limitation is staffing, not hospital capacity.
We've got field hospitals set up in the parking garages of the major hospitals in Jackson, MS right now. The last time that happened was Hurricane Katrina. The hospitals are, in effect, full. Beds are available in the maternity ward, but there isn't staff enough to handle any more patients in the hospitals.
The limitation is certainly staffing. Regardless, the "hospital bed" count is reaching capacity in various deep-south states now-ish, and it will certainly reach it soon if nobody is willing to combat the spread of COVID.
Beyond any ethical issues, which are arguable, there are no legal structures to support this kind of decision making on the side of physicians. Any physician making the decision to deny life-saving care on this basis would be taking his life savings, and likely his license, and shredding them.
They can make decisions based on standard triage procedures, like determining which patients will benefit most from care. I don't think triaging on the basis of which patients were vaccinated would be any more legal than triaging on the basis of which patients were drinking, which patients were having unprotected sex, or which patients are citizens.
Sure it would. Vaccinated people are far more likely to have better outcomes with supportive care (which is all a hospital can do in the hypothetical overwhelmed state) than the unvaccinated, so if it comes down to triage, yeah, the unvaccinated are going to be at the back of the line, since they're most likely not to make it no matter what you do.
What are you talking about? Age makes a far larger difference than vaccination status, so young and middle age unvaccinated people are going to be ahead of old vaccinated people. Two people of the same age and with the same level of respiratory distress are going to have basically the same prognosis, regardless of which one is vaccinated. It's only before people are hospitalized that vaccination status is a good predictor of outcome.
I think there are pretty good reasons not to do triage on the basis of moral condemnation of peoples' past behavior or choices. I'm guessing that nearly all of the people who refused the vaccines are people who either were genuinely misled by the people they listened to, or people who were phobic about needles or doctors or vaccines or something. Being misled on a complicated technical and risk-balancing question isn't a moral failing.
That sounds good in the abstract, but if you have a heart attack patient who clearly won't benefit from treatment (either because they're already fine, or because they're too far gone) and a covid patient who will clearly benefit from treatment, I don't think you'll find any doctor who wants to waste the effort following your guidance.
I can't help but notice that only the second of those articles notes the absolute numbers of beds in the hospitals in question, and the two listed are 15 and 25 (which doesn't have an ICU, it is so small) bed hospitals, total.
I think that if they led with "Hospital with 15 beds fills up, needs to send sick people elsewhere" the effect of the shortage would seem a little weaker. I know they are very rural, low population density regions, but still, 15 hospital beds can get filled up by a good sized highway accident, or a bus roll over.
In terms of how hospitals manage beds, in many states it is regulated either by hospital boards or the state medical boards directly. Hospitals need what are sometimes called "certificates of need" to add more beds and the like, officially to avoid excess capacity and cut throat competition. It's a straight up cartel with state backing.
There's an article in the Aug. 23 New Yorker by Joshua Rothman about rationality. Though the article didn't mean to make this point, I thought it demonstrated the limits of a supposedly objective rationality in a lack of consideration of the difference between people's individual goals and personalities. Fuller discussion at http://kalimac.blogspot.com/2021/08/rationality.html
The article is an okay overview for the uninitiated, and is an amusing contrast with the one they published last year when the NYT "doxxing" drama was gaining steam. Whereas that time around they mainly focused on controversies, here they don't even mention the likes of Scott, Yudkowsky and Hanson, nor the weird ideas like EA, AI risk or NRx, and atheism is only barely hinted at, with disapproval.
I'd say that the general thrust is roughly in the right direction, with the most salient point coming near the end: "I don’t think she would have been helped much by a book about rationality. In a sense, such books are written for the already rational." Of course the logical conclusion that therefore such education should begin at childhood would be far too radical for such a milquetoast piece, but at least they have enough sense not to warn against it.
It showcases how getting numbers that tell us actual facts about the universe is much harder than is commonly imagined, and how our Big Data methods are deluding us. Gives some hope regarding AI, as it suggests we're nowhere near as close to superintelligence as some fear.
Also gets extra kudos for getting me to engage with David Chapman's work (Meaningness, In the Cells of the Eggplant, Vividness), whom I was aware of, but was sleeping on. Pretty interesting that Chapman practices Vajrayana Buddhism. His vision of the world as both charnel ground and pure land is quite haunting (https://vividness.live/charnel-ground and https://vividness.live/pure-land).
I've finally been able to put my fuzzy and vague skepticism about blockchain and NFTs into coherent words, and I call it the "Degraded Blockchain Problem".
Curious what others think. Are there any blockchain based apps that actually get around this issue?
I think for gaming the blockchain is mostly just letting you get around regulatory issues.
Imagine that these games were fully centralized and there was a marketplace where players could buy and sell items worth thousands of dollars. Then the game company becomes a broker of transfers of valuable objects and probably needs to do a bunch of stuff to ensure legal compliance (KYC / AML). If all transfers happen on the blockchain and the game company never has custody of any assets, then they likely have less legal liability.
Blockchains become really interesting when there's a reasonable possibility of competing UIs on top of the on-chain data. For instance with prediction markets, maybe with social networks, etc.
If no one entity has a monopoly on "the thing you care about", then it can work. For instance imagine an ecosystem of games that share characters and items. If one game developer severs the link between the NFTs and the in-game items, then they're cutting themselves off from this ecosystem. They're also likely violating the expectations of users in a more visible / traceable way than when normal centralized companies make changes to their games, possibly leading to more user revolt.
I don't know if anyone's actually written a blockchain game worth playing, but I think the key would be to make the information on the blockchain be something that users have to agree on to use the app together. E.g., for your "blockchain pokemon" example, the app could require you to prove that you own a particular monster on the blockchain before you can use it in a battle with another user.
Sure, you could write a new app that removes that backend requirement and allows anyone to battle with whatever Blockemon they want (like Pokemon Showdown does for the actual Pokemon franchise), but presumably the reason you're playing this game at all is because you both want to play with the game's artificial scarcity where you can be the only person on Earth who has a shiny Pikachu. If someone doesn't want to do that and wants to battle you with a team of six hacked Mewtwos, you can just... not play with them.
("You could just mod the game to do something else" is an argument that applies to every game, really, not just blockchain ones.)
It's still an extremely narrow use case - it's basically saying "the only reason to put your game on the blockchain is because your game is about playing with artificially scarce assets", which is a little circular. But I think that some MMO or CCG mechanics could fit into that box. And perhaps having it on a chain instead of on a central server could help if your game has RMT and you want to promise players that scarce assets will remain scarce? Still thinking this over.
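For what it's worth, the "prove that you own a particular monster on the blockchain before you can use it in a battle" check mentioned above is small in practice. Here is a hedged sketch using web3.py against a standard ERC-721 ownerOf call; the RPC URL, contract address, and token id are hypothetical placeholders rather than any real game, and the snake_case calls assume web3.py v6.

```python
from web3 import Web3

# Minimal ABI fragment for the standard ERC-721 ownerOf(uint256) view function.
ERC721_OWNER_OF_ABI = [{
    "name": "ownerOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "", "type": "address"}],
}]

def owns_monster(rpc_url: str, contract_address: str,
                 token_id: int, player_address: str) -> bool:
    """Check on-chain whether player_address currently owns the given token."""
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    nft = w3.eth.contract(
        address=Web3.to_checksum_address(contract_address),
        abi=ERC721_OWNER_OF_ABI,
    )
    owner = nft.functions.ownerOf(token_id).call()
    return owner == Web3.to_checksum_address(player_address)

# Hypothetical usage: gate matchmaking on the result of this check, e.g.
# if not owns_monster(RPC_URL, BLOCKEMON_CONTRACT, 1234, player_wallet):
#     reject_battle("you don't own that shiny Pikachu")
```

The check itself is trivial; the interesting design question is the one raised above, namely whether players keep choosing clients that enforce it.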
The broader issue with blockchains is that trust is actually a good thing, so designing a system that doesn't require it isn't actually beneficial in any way. Some degree of social trust is a requirement for civilization, and the higher trust your society is, in general the better. Poor developing countries are very low-trust, the Nordics are the opposite.
The conspiratorial blockchain mindset is a perfect fit for our current global movement of populism. Institutions are bad, tear them down! Banks are, uh, bad somehow! Every element of civilization is corrupt! Ultimately this worldview ends in nihilism and never actually achieves anything (as Martin Gurri notes). It's unpopular to say institutions are actually a good thing, financial intermediaries like banks are actually a cornerstone of a functioning economy, political parties are probably good, etc. So no, removing trust and decentralizing everything doesn't really add any value, which is why blockchains have solved literally zero real world problems other than slightly improving black market payment infrastructure
The practical reason is that, in repeated real-world tests, most voters cannot identify anything about individual candidates without a party label. It's why nonpartisan/top two primaries, which have been tried for decades on the West Coast, failed to change US politics or elect more moderate candidates. Low information voters (aka most of them) lack the tools to analyze a candidate's stances, without that party label/branding. You get more celebrity/demagogue candidates without parties, and the fact that the US has the weakest parties in the developed world by a huge margin is responsible for, uh, recent celebrity demagogue politicians....
> in repeated real-world tests, most voters cannot identify anything about individual candidates without a party label
Huh? How have third parties moderately succeeded then? https://en.m.wikipedia.org/wiki/List_of_third_party_performances_in_United_States_presidential_elections . That seems like a weird bit of pop polling. Individual candidates can brand themselves as themselves, and this also seems like an argument against small parties or small parties rapidly growing to big ones, which happens a lot in other countries. And we get plenty of celeb demagogues with parties today. “Weakest parties in the developed world”? Plenty of other developed countries have some of their parties go from 50% to 20% or 10% representation, which seems weaker than Rs and D.
> low information voters lack tools to analyze candidate stances
I'm not sure what you mean by "third parties have moderately succeeded" - Jesse Ventura got elected governor of Minnesota, and there have been a couple of high-profile Senators from Maine and Vermont and Alaska, but other than Jesse Ventura, even these "third party" candidates really had strong two-party branding (Bernie Sanders even ran for various Democratic party positions!)
"Weak party" is a technical term that means that elected representatives who are members of the party are allowed to vote their conscience instead of voting the party line. In the UK they occasionally allow this, but in the United States, you regularly have 10-20% of a party breaking with the party line on controversial votes. The lock-step voting of the Republican party in the Mitch McConnell era is a historic abnormality suggesting that the parties are greatly strengthening.
> low information voters lack tools to analyze candidate stances
Most voters get information about candidates from the candidate's TV ads, from the candidate's name, and from the party affiliation of the candidate. When it comes time to predict which way an elected representative will vote on an issue that comes up in the legislature, knowing their party affiliation is a very strong piece of information, but even in the weak party system that the US has, this is much stronger than all the other information that we, the public, have about elected officials.
Moderately high information people can judge that, say, Kyrsten Sinema and Joe Manchin are less likely to vote for the Democrats' infrastructure bill than Bernie Sanders or Chuck Schumer. But even very high information voters can't regularly predict which particular issues in a bill will be ones that particular candidates will object to.
Most voters delude themselves into thinking that they can judge better by using all the information they have than by just using the party name. But in practice, you are more accurate predicting a straight ticket line than predicting on the basis of campaign ads and felt personality.
I certainly agree with the underlying sentiment -- being both Norwegian and Texan it's really interesting to compare the ways trust works in both societies (Norway's trust is noticeably higher IMHO).
The one quibble I have is that there are certain things I would never want to trust anyone with and it's nice to have provably secure systems. My sensitive accounts, for instance. It's nice to know my passwords for those are not stored in plaintext and an employee couldn't compromise them even if they wanted to. But... I guess I am just taking that on trust because it's not like I can audit the servers. Which I guess is your point!
This is a very good point and I agree with you, but I want to comment to say that a system that requires a certain amount of trust does not necessarily *increase* trust in the system. In other words, having a fiat currency requires society to have a certain amount of social trust, but creating a fiat currency will not automatically make people trust each other more.
Since you've actually read the bitcoin whitepaper you know more about this subject than I do. Still, I have to ask: I assume you've looked at Chainlink in depth? My impression was that this is the problem they're aiming to solve with their DONs
In that case, my hopes for the future of blockchain are:
1) where a centralized authority is trying to establish information symmetry amongst untrusted actors e.g. Helium.com
2) purely digital use cases that don't require participating in the physical realm. It seems like information symmetry has a lot of potential. I don't know what impact this will have on society. It will be interesting to see the results from Bluesky
This sounded to me like the problem of having a system contain itself, "Nobody has yet found out how to cram an entire app entirely inside the confines of a blockchain without having to connect it to an external service that gives it meaning."
this is generally the case yes; actual decentralization is a very hard problem, so almost all projects more complicated than basic transactions/trading find many ways to 'cheat' and make some parts less decentralized than they should be, but such that they are still able to convince users and investors that the cheating doesn't really count (but almost no one really cares either way most of the time because they're all making money and having fun). NFTs (and much more than just pointing to centralized URIs, although it is possible to use less-centralized URIs like IPFS) are generally a good example of this in action.
There was a piece in The New Yorker this week about methane capture from the atmosphere (specifics weren't that interesting or important).
But, METHANE SHOULD BE TALKED ABOUT MORE!! Reducing methane emissions is the BEST lever we can pull to buy ourselves more time in the fight against climate change.
Methane's big unique ability is that, unlike CO2, it breaks down in the atmosphere on its own within a decade or two, so concentrations fall just by stopping new emissions! This means we don't have to develop methane capture technology!
My impression is that in many liberal circles, methane is overemphasized.
Although it has a very strong warming effect per unit mass, this is mitigated in two ways. First, methane is worth money so, in addition to regulations against leaking, there is a financial incentive not to leak it. More importantly, though, there are atmospheric processes that destroy methane; it has a half-life of about 12 years[1]. In contrast, ~50% of CO2 remains in the atmosphere after 50 years and ~25% remains after 1000 years[2].
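A minimal back-of-the-envelope sketch of that asymmetry, assuming a pulse of methane decays exponentially with the ~12-year half-life cited above (CO2 is deliberately not modeled here, since its removal doesn't follow a single half-life):

```python
# Fraction of an emitted methane pulse still in the atmosphere after t years,
# under the simplifying assumption of pure exponential decay.
CH4_HALF_LIFE_YEARS = 12.0  # approximate figure cited above

def remaining_fraction(t_years: float, half_life: float) -> float:
    return 0.5 ** (t_years / half_life)

for t in (12, 24, 50, 100):
    print(f"CH4 remaining after {t:>3} years: "
          f"{remaining_fraction(t, CH4_HALF_LIFE_YEARS):.1%}")
# ~50% at 12 y, ~25% at 24 y, ~6% at 50 y, ~0.3% at 100 y,
# versus roughly half of a CO2 pulse still airborne after 50 years.
```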
This can be seen by comparing the AGGI bar for methane (CH4) to CO2[3]: CO2 keeps growing, but CH4 has not increased much in recent years (though N2O has).
Having said all that, I'm open to the possibility that reducing methane is more cost-effective, especially in the short term. In the long term, I think investments in clean energy R&D are probably better because once building nuclear/wind/solar/EGS/tidal power plants becomes the new normal, opposition to carbon taxes should melt away and we will not revert back to coal/gas plants. Thus the effect of clean energy investments is mostly permanent, while the price of avoiding methane leaks is constant vigilance.
I'm not on-board with "talk about methane more". If we're concerned about short-term issues, we can just detonate some nukes inside of volcanoes. If we are (correctly) concerned about long-term issues, the only factor that needs to be considered is CO2.
"we can just detonate some nukes inside of volcanoes"
Some part of my brain is going "That would be totally awesome, WE SHOULD DO IT!!!!" while the rest of my brain is trying to restrain it because no. No, we shouldn't.
The idea of short-term geoengineering to reduce temperature is one that shouldn't be dismissed out of hand, but even then there are far more controlled and safe ways to get dust into the upper atmosphere than nuking volcanoes, even if a tube tied to a weather balloon is far less epic.
Did anyone ever figure out if iron fertilization would actually make a significant difference w/r/t carbon capture? There's something about fertilizing the oceans that just appeals to me. Kind of has a 50s "We'll farm the oceans!" feel to it.
They do, but they also give off large amounts of dust and SO2 (SO2 in the atmosphere winds up as H2SO4 aerosol sooner or later). Eruptions powerful enough to inject that dust and sulphur into the stratosphere tend to have net cooling effects, at least on a timescale of years.
Probably not a good idea. Global incoming shortwave affects photosynthesis, which both partially offsets the cooling (less photosynthesis -> more CO2) and puts the world food supply in doubt.
Increasing man-made structures' albedo is safe, and direct CO2 removal is obviously safe, but aerosols have a large potential to do more harm than good.
The long term and the short term are not unconnected! Buying ourselves more time in the short-term (by reducing methane emissions, or detonating nukes if you really want to) can massively improve our long-term outcomes (more time for tech development, price decreases, capacity building, etc.).
If the vast majority (or even the consensus) of people and governments aren't concerned about the long-term, I don't believe that short-term fixes will make a hill of beans of difference.
Sure! But will it be easier (technologically, financially, politically) to radically decarbonize the economy and drawdown megatons of CO2 in 2021 or 2045?
I think you're wrong. If we can't find agreement in 2021, why will we be able to find agreement in 2045?
(and "because of various climate disasters" isn't a valid argument; I believe we are presuming that short-term actions prevent those climate disasters from happening).
We're doing much better in 2021 than we were in 2010 at finding international (and domestic) agreement about drawing down carbon emissions. I don't see any reason why 2045 wouldn't be so much better (especially since internal combustion vehicles will all be over 10 years old by that point).
The people who will be in charge then will have been born in 1995. Very different cohort than today's decision makers.
But even if you disagree with me on that, technology plays a really important role here. Look at the price of renewable energy today vs 20 years ago. Will the same happen with direct air capture? Maybe!
We should also talk more about third-world CO2 emissions from burning wood and forests, which IIRC accounts for more than half of global CO2 emissions--but I can't even find a source for this critical number now!
Consideration of CO2 from third-world countries has been eliminated from all of the scientific studies on global warming that I've read, with the claim that we shouldn't count CO2 emissions from burning wood because it's "carbon neutral", meaning we can suck that carbon back out of the atmosphere by growing more wood. Now there are even people advocating switching to wood as a fuel source because it's "carbon neutral!"
I suspect this is driven by the desire to blame global warming on high-tech industrialized countries rather than on low-tech non-industrialized ones, and by the general romantic / leftist desire to demonize technology.
The problem is that you can't say both that
(A) climate change is so urgent that we need to reduce CO2 levels within the next 30 years, and
(B) we shouldn't worry about most of the CO2 now being emitted, because it will be re-absorbed within 100 years.
• The figure that wood burning accounts for more than half of all CO₂ emissions sounds highly dubious to me. Eyeballing the first chart I could find about the topic ( https://en.wikipedia.org/wiki/File:CO2_Emissions_by_Source_Since_1880.svg ), CO₂ emissions from land use changes (deforestation) are significant but much less than half of all emissions (they look like ~15%). These are about emissions from net deforestation, so they likely don't contain emissions that are compensated for by the growth of other forests. Also, poor countries (especially ones poor enough to mostly use wood for energy) just don't use that much energy. Now, some forests are burned not for energy, but to make way for agriculture; however, this results in net deforestation, so it should be included in the above ~15% figure.
• It's not just that the forests we cut down today for energy *will* grow back during the next 100 years. It's that while we cut down one forest, other forests are growing *right now* (e.g. ones we've planted in the place of forests we've cut down in the past few decades), so net emissions from reductions in the volume of forests are less than the total emissions from burning wood.
• Environmentalists have definitely been opposing e.g. forest burning in Brazil in the past few years.
• It isn't necessarily contradictory to argue that we need to reduce the amount of CO₂ we emit during the next 30 years, and that current CO₂ emissions aren't that much of a problem if they are going to be reabsorbed on a (say) 60–100 year time frame. I don't expect global warming to be anywhere near as harmful as environmentalists tend to say nowadays; however, the more sensible environmentalist argument isn't that the current emissions will lead to huge problems within 30 years; it's that the current and continuing emissions will lead to huge problems on longer time scales (perhaps 60–100 year). It's not realistic to expect that we will continue the current level of fossil fuel use for 30 years, and then abruptly reduce it to 0. Rather, it can be expected that we will continue to use some amount of fossil fuels for many decades (especially so if we don't take steps towards eliminating their use during the next few decades). Say we want to keep the total net CO₂ emissions over the next 100 years below a certain level (say, one leading to ≤2°C warming). We need to choose a trajectory. Likely the only feasible trajectory achieving that is to significantly reduce fossil fuel use already during the next few decades, and keep using a low and decreasing amount after that. At the same time, since we are looking at emissions on a 100 year time scale, burning wood that will grow back within 100 years is indeed net neutral in this calculation.
My point was that being "net neutral" doesn't help us at all until the year ~2100, and people are worried about what will happen before then. Reducing carbon from burning wood doesn't give permanent benefits, but gives just as much benefit as reducing carbon from other sources for the first hundred-or-whatever years, and we claim to be concerned about those years.
Reducing carbon from burning wood would be technologically easy. Just give out a billion Franklin stoves. It's still not cheap, but it could easily be more cost-effective than insulating houses, which is part of the current plan.
Who are "we"? Again, IMO the more sensible concerns about global warming are the long-term ones; claims that it's going to be catastrophic within a few decades are definitely bullshit or hyperbole (but that doesn't necessarily imply that concerns about the longer term are also unfounded).
>Say we want to keep the total net CO₂ emissions over the next 100 years below a certain level (say, one leading to ≤2°C warming). We need to choose a trajectory. Likely the only feasible trajectory achieving that is to significantly reduce fossil fuel use already during the next few decades, and keep using a low and decreasing amount after that. At the same time, since we are looking at emissions on a 100 year time scale, burning wood that will grow back within 100 years is indeed net neutral in this calculation.
The current IPCC scenario for "just under 2 degrees at 2100" is "halved CO2 emissions by 2050, net-zero worldwide by 2075, -20% of current production after that i.e. large active capture". Their scenario for "just under 1.5 degrees at 2100" is "zero net CO2 emissions after right now, achieved by net-zero worldwide at 2050 and massive active capture afterward".
I recall reading that cow burps are (one of) the greatest source(s) of atmospheric methane and that adding seaweed to their diet vastly reduces that, but I don't know if there have been any developments in that area. I suspect that, since farmers do not benefit from eliminating methane emissions, this might have to be legislated to make a difference.
Ha! You have revealed my bias here. I am the founder of a company (www.alga.bio) trying to make that seaweed in the lab so we can dramatically scale up the use of it. (The secret is that farmers *do* have an incentive to adopt it.)
I'm not sure I totally understand your point. If methane emissions stay constant, methane concentrations in the atmosphere stay fairly constant as well. But if we cut emissions, concentrations will drop.
In the short term (20 years), methane has about 100x the warming potential of CO2, which means we could REALLY slow warming by reducing the concentration of methane in the atmosphere.
My point is that since methane is eliminated from the atmosphere in a few decades, emitting less methane this year leads to lower temperatures for the next few decades compared to the alternative where we don't reduce emissions this year; however, if we look at temperatures more than a few decades into the future, our methane emissions this year don't affect them.
I don't expect global warming to cause serious problems in the next few decades, even if it may cause serious problems farther into the future. Under these assumptions, there is little point in prioritizing reducing methane emissions today, if it's only likely to help during a period when the warming so far isn't a major problem yet, and won't help when (due to continued CO₂ emissions and continued warming) it may become a major problem. It will be worth cutting methane emissions when (if) we are at a point where the warming is expected to cause serious problems within a few decades.
> if we look at temperatures more than a few decades into the future, our methane emissions this year don't affect them.
This isn't right. Temperature a few decades in the future is shaped by all the net energy flows into the earth over the next few decades. If the present decade has greater net energy in-flow, then temperature a few decades from now will be higher than if the present decade has lower net energy in-flow.
Temperature a century from now = temperature currently, plus the net energy in-flow from each of the ten decades. If we can get the current decade lower and keep all the other decades the same, then we keep temperature a century from now from being as high as it would otherwise.
Doesn't work that way. The energy flow in and out of the Earth's surface is balanced each and every year. But different compositions of the atmosphere result in a different temperature being required to match the energy out with energy in.
I'm surprised at the claim that the energy flow in and out is balanced each and every year - I would have thought it could easily take years or decades to reach the new equilibrium temperature, when there's a change in atmospheric transparency at different wavelengths, or a change in surface albedo, or anything like that.
I'm under the impression that any given composition of the atmosphere leads to a certain equilibrium temperature. With a given composition of the atmosphere, a warmer Earth radiates more heat out into space. So it's not like any extra energy inflow will stay with us forever as extra heat.
Now, I don't know how long it takes to reach that equilibrium temperature; perhaps it takes a long time. However, even then, assuming that the current level of greenhouse effect (even including methane) is lower than the level we will eventually reach due to continuing CO₂ emissions even assuming a relatively low trajectory of future emissions, the current methane emissions just make us reach the eventual equilibrium slightly faster; they neither increase the eventual equilibrium, nor make us reach a higher peak of temperatures than the eventual equilibrium.
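A toy zero-dimensional energy-balance sketch of that point (all parameter values are illustrative round numbers, not measurements): a given greenhouse strength implies a given equilibrium temperature, and the surface relaxes toward it over time rather than accumulating extra heat forever.

    # Toy zero-dimensional energy-balance model; numbers are illustrative only.
    SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W/m^2/K^4
    ABSORBED = 240.0       # absorbed solar flux, W/m^2 (rough global mean)
    HEAT_CAP = 1.0e9       # effective heat capacity, J/m^2/K (illustrative)
    YEAR = 3.15e7          # seconds per year

    def step(temp_k, greenhouse):
        # Outgoing longwave radiation, reduced by a crude greenhouse factor (0..1).
        outgoing = (1.0 - greenhouse) * SIGMA * temp_k ** 4
        return temp_k + YEAR * (ABSORBED - outgoing) / HEAT_CAP

    temp = 288.0                           # start near today's mean surface temp, K
    for _ in range(300):
        temp = step(temp, greenhouse=0.40)  # a slightly stronger greenhouse
    print(round(temp, 1))                  # creeps up toward the new equilibrium (~290 K)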
What are the chances that, if God exists, he practices Bayesian reasoning? Does a perfect God update God’s priors? Or does a perfect God have perfect priors to begin with?
Memo from the Irony Department: Did you know that the first use of Bayesian reasoning was a proof that God exists? After Rev. Thomas Bayes died, his good friend Rev. Richard Price (also a mathematician) went through Bayes’s papers and found the unpublished essay on the "doctrine of chances", and Price had it published. The good Reverend Price then used the late Reverend Bayes’s doctrine of chances to formulate a proof for the existence of God. David Hume, who had published his _Of Miracles_, which argued against the existence of miracles (and thus indirectly against the existence of God), wrote Price a nice letter saying that he found Price's argument to be most ingenious. I don't think Hume was convinced by Price's proof, though.
TLDR: Zohar, your question above and then running across this tidbit of information about the Bayesian proof of God started me thinking. As an agnostic I wondered if one could use Bayesian reasoning to define the *type* of God that *might* exist, given what we now know about the universe as priors. Since you have a kool Kabbalist handle, Zohar, let's call this hypothetical god-entity Ein Sof ("the Unending") to honor the Kabbalists who were the first to de-Yahwehnize the concept of God into something much more abstract.
Assumption: The Ein Sof god entity would be interested in a universe with physical constants that would be amenable to development of life. Unknown: if there might be other emergent phenomena that Ein Sof would be interested in (see item 1).
1. There's a high probability that this Ein Sof entity would be OK with emergent phenomena. And, considering the physical constants of the universe that we exist in, there’s a high likelihood that Ein Sof may be interested in a universe that could support the development of life. NB: despite the belief/claims of some physicists that all the interactions of particles at the quantum level have been pre-determined since the Big Bang, there’s no way to predict from currently known physical laws the emergent phenomena we’ve seen (Standard Model → chemistry → life → consciousness). Therefore, since the laws of emergent phenomena don’t seem to be baked into the constants of our Universe, it’s likely that Ein Sof is interested in a certain level of randomness balanced against organizing attractors.
1a. QUESTION TO CONSIDER: Are there other emergent phenomena that have not yet manifested themselves in our universe, that we can’t conceive of, but that Ein Sof might be interested in seeing emerge? E.g. I’m put in mind of Carl Sagan’s quip that the universe is evolving toward the emergence of God. Could Ein Sof be trying to create a universe that will become aware of itself and functionally become another Ein Sof-type entity?
2. What does Ein Sof know and how and when does Ein Sof know it?
2a. Assuming this God-entity cared about life, Ein Sof would need to know beforehand the values for the 26 dimensionless constants that create a universe where life can evolve; Ein Sof would have had to acquire this knowledge somehow. One avenue is through omniscience. This seems unlikely, though, given the priors we see in our current universe (see item 3, but an item 4b scenario would work in favor of omniscience).
2b. Assuming Ein Sof did not know ab initio the ideal constants to encourage life, this entity would probably need to run experiments to get the correct recipe for the ideal dimensionless constants to promote life. In which case, this universe may be one of many experiments that Ein Sof has run, is running, or has yet to run.
3. Assuming that Ein Sof is interested in emergent phenomena, Ein Sof is probably not omniscient. If Ein Sof *were* omniscient, that would probably mean that all the phenomena of emergence are just an illusion. And likewise, Ein Sof wouldn’t need to create a universe if it already knew what would happen (and more importantly Ein Sof wouldn’t need to create a universe where some physicists need to believe that all particle interactions are non-random and pre-determined). Therefore, Ein Sof may not know the endgame for this universe; Ein Sof may not know its own ends; and Ein Sof may not be able to explain or prove its own existence, and it’s through the externalities of universe building that it’s trying to understand itself (see item 4a and item 5).
4. The Yahweh theory of God that the ancient Israelites believed in (and that contemporary atheists use as a theist stalking horse) is almost certainly ruled out by our current understanding of the universe. The remaining theories of God which may or may not define Ein Sof:
4a. The universe is a simulation run on a giant computer. This is an updated version of the 18th Century Watchmaker Hypothesis that several modern physicists (plus Elon Musk) have suggested. Supposedly there’s a mathematical proof that this scenario is impossible. I haven’t seen it, and I probably couldn’t understand it if I did see it, so I’ll leave that question open. But if the universe is a giant computer simulation, we’re facing the Turtles All the Way Down problem, because that would mean Ein Sof is a computer programmer sitting in a universe external to us. Where did that universe come from?
4b. The universe is the mind of God. This is an old hypothesis put forth by mystics in various schools of thought over the past two and a half millennia. If so, this raises some other interesting questions, such as: can Ein Sof observe the workings of its own mind? Are we the sensory apparatus of Ein Sof? Can Ein Sof explain itself to itself? And we face another Turtles All the Way Down problem of who or what created Ein Sof. Likewise, we may just be Ein Sof omnisciently thinking out how the universe is going to evolve. So, under the Mind of God hypothesis, the Ein Sof is very likely omniscient, but it’s too lazy to bother to build a universe.
4c. There’s no reason that there need be a single Ein Sof, there could also be Multi Sofs.
5. Under the Carl Sagan Quip Hypotheses that the universe is evolving into god (see item 1a), Ein Sof may not exist yet, but will exist in the future. In which case we might be the equivalent of Pre-Cambrian eukaryotes dreaming of our unknown descendant(s). But it would be kind of sucky for Ein Sof if it evolved just in time to see the heat death of the universe — q.v. existential angst.
I hope you enjoyed this speculative romp. Remember, adding a God-entity to the universal equation violates Occam’s Razor. But for Occam’s Razor to work, it requires that there be a simpler explanation. Unfortunately, we don’t have any testable hypothesis for the existence of the universe (and I doubt we ever will). If we remove the God-entity, we have to envision some meta-laws which would create the laws that this universe is based on, and those meta-laws would need a matrix in which to work. So, either way we are still faced with the Turtles All the Way Down Problem.
What are God's priors on God having priors? His priors on what His priors are? His priors on what priors are? His priors that Bayesian reasoning is stupid?
I'd say that when God is eventually created, They will use Bayesian reasoning with more than 80% probability. Maybe later They'll figure out something better.
1 Corinthians 13 gives some hints about knowledge and God from a Christian theological perspective.
"Love never fails. But where there are prophecies, they will cease; where there are tongues, they will be stilled; where there is knowledge, it will pass away. For we know in part and we prophecy in part, but when completeness comes, what is in part disappears. When I was a child, I talked like a child, I thought like a child, I reasoned like a child. When I became a man, I put childish ways behind me. For now we see only a reflection as in a mirror; then we shall see face to face. Now I know in part; then I shall know fully, even as I am fully known."
Paul expects that in heaven he'll have something more like the kind of mental faculties God has. And he's not even willing to use the word "knowledge" to describe those faculties without a few sentences of qualification. What we call "knowledge" is partial, like how a child's babbling resembles language or how a mirror's reflection (hammered bronze, at the time, very different from our smooth silvered glass planes) resembles the reflected object.
So, I think Paul would tell you that the answer is no. What God does is very unlike the thing we do called "reasoning" or "knowledge". That's a messy, complex, imprecise process that introduces all sorts of distortions. It's a partial, imperfect attempt to do something else, which is very different, and which is what God does.
Interesting you should ask. The phrase that Yahweh used when he spoke to Moses was אֶהְיֶה אֲשֶׁר אֶהְיֶה — which English-speaking theologians have translated as "I am who I am", but that entity's words can also be translated as "I will become what I choose to become". The medieval Jewish Kabbalists had a vision of Ein Sof as a dynamic force pouring creative energy into the cups of the ten Sefirot, which in turn overflowed into the physical and spiritual world (I may be oversimplifying, though). Whether Ein Sof has personality and intelligence and is omniscient aren't really questions that seem to be addressed in what little I've read. But the Kabbalistic world view seems to be totally down with emergent phenomena. So Ein Sof could be tuning the universe via Bayesian-like process.
Whenever I stumble across this sort of information about the Yahweh godhead I always walk away wondering why current Christianity seems so fucking devoid of well, anything. We've ended up with this total Chad of a monotheistic god and it blows. I might still be a Believer if there was room for a "becoming" god.
This feels like you and I aren't looking at the same Christianity. The fact that Christianity has, in Christ, a dynamic, becoming God is what puts it ahead of the other monotheistic religions, to my thinking. Christmas is about God becoming man. Good Friday is about The Living God becoming dead. Easter is about the dead Christ becoming alive again.
I haven't read Hegel in the original, just critiques and summaries of Hegel. But Hegel seemed to be concerned about God being something that was in a state of becoming. But didn't he have something about God dying to become Jesus and Jesus dying to become God? If he lived a few centuries earlier, he would have been burned at the stake for writing that.
I think omniscience is pretty much isomorphic to having a prior of 100% in favor of every true proposition. So no update could ever be needed (or possible).
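Spelled out with Bayes' rule, the arithmetic of that claim: if the prior is $P(H) = 1$, then $P(\neg H) = 0$, so $P(E) = P(E \mid H)P(H) + P(E \mid \neg H)P(\neg H) = P(E \mid H)$, and

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} = \frac{P(E \mid H) \cdot 1}{P(E \mid H)} = 1.$$

No possible evidence moves a prior of exactly 1 (or, symmetrically, 0). The only wrinkle is that this requires $P(E \mid H) > 0$; if the prior of 1 sits on every true proposition, that's automatic, since whatever actually gets observed is consistent with the truth.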
The traditional Catholicy, non-Calvinist Protestanty, Orthodoxy, (I'm adding -y because I'm going to butcher the specifics here) way of thinking about it is this: just because God knows what you will choose doesn't mean you weren't free to choose it. It just means he already knows what your choice will be, because he is equally present in past, present, and future.
To think of it another way: I know I chose to eat Arby's for lunch today. I know it because it happened, and is in the past. Does the fact that I know what choice I made in the past mean that it wasn't an actual choice? Of course not. Does the fact that I cannot change that choice now, because it already happened, mean that I couldn't have chosen something different then? Most would say no. Well, God is in the same position that someone would be if they existed at the end of time and knew everything that had happened in history: such a person could exist without disproving free will, just as you knowing what choice you made yesterday doesn't disprove your free will. Since God is "outside of time", so to speak, he does exist in such a position and we shouldn't sweat it over whether we don't have free will just because he already knows what we did/are doing/will do.
OK then, in what sense does God have free will? Doesn't He need it to create the universe and all the people in it? If not...then the entire Intelligent Design argument, which is predicated on the *necessity* for an independently-choosing intelligence, falls to the ground. If God is compelled by perfect rationality, or perfect foreknowledge, then He is no more intelligent chooser of consequences than is the law of gravity "choosing" to make the apple fall down instead of up.
Hoo boy that's a tough one. I'm just a layman, and that's more of a *Summa Theologica* type question. From what I understand, God is free to be who he is. I Am what I Am, and all that.
Let me put it this way (probably badly): let's say you encountered a machine with two buttons. One button will give you something you want (let's say, $10,000) and the other button causes a rubber mallet to whack you in the kneecap. And let's say before you push a button you learn, with 100% confidence, which button is which. Well, in one sense you could argue that you aren't really free to choose which button any more: you'll obviously choose the $10,000 button. Yet in another sense you still have free will because you can now exercise your will perfectly. If you didn't know which button is which, your will and your actions might work at cross purposes: you will the $10,000 button, but in your ignorance you press the mallet button. So does learning which button is which remove your choice, or empower you to choose correctly?
Similarly, God wills a particular end to, well, everything, and acts in perfect accordance with acquiring that end because he has perfect knowledge and perfect power. In one sense you could say he is not free, because he will always choose to do that which is in accordance with his will. But the only 'freedom' he lacks is a hollow kind: the freedom to make mistakes, to frustrate your own will.
Depends on which Abrahamic God you're talking about. Actually, I think only modern Christian sects go with the full omniscience jive while at the same time separating God from the rest of the universe. I guess (some) Sufis also see Allah as omniscient, but he's omniscient through all living beings, since we're all emanations of the godhead — I've sort of internalized it as we're all little neurons in God's brain.
Can you find any references to back up this "only modern Christian sects" claim? This sounds like the sort of thing that I should be able to find at least implicitly in the Summa Theologica...
The fact that Aquinas expends so much effort on Question 14 (the question of God's omniscience) suggests that there was significant debate on this issue among medieval scholastics (at least in the 13th Century). Yes, I suspect that most modern Christian sects derive their arguments for God's omniscience either directly or indirectly from Aquinas — directly as in an agreement with his logic or indirectly as a response in disagreement to his logic. But old "Dumb Ox" Aquinas had too much meat on his arguments for Christian theologians to ignore him. It took a few centuries and a bunch of Popes before Aquinas was accepted as holding the "correct" views by the Catholic Church. And even though (I gather) most Protestant sects regard Aquinas with a jaundiced eye — being that his arguments were contaminated by "pagan" thought (i.e. Aquinas depended too much on Aristotle's reasoning methods) — I don't think I'm wrong when I say most modern Christian sects seem to be on board with his basic claim of God's omniscience.
Doesn't Aquinas *also* spend a lot of effort on proving that God exists? There was a lot of debate about proofs of the existence of God, but there were very few, if any, participants in these discussions that had significant doubts about the existence of God. So I don't think you can use the existence of lengthy arguments as proof that the majority of the community disputed the conclusions of those arguments.
(Besides, many of the classic proofs that date back to well before Aquinas lead directly to a God with the three-O's of omniscience, omnipotence, and omnibenevolence.)
What I don't understand is why people seem to have trouble with the idea that religions are not static systems of thought. Religions evolve. And the reason they evolve is because co-religionists argue amongst themselves about the meanings and implications of their beliefs.
But I never said the *majority* disputed the conclusions. I just said there was "significant debate". Significant, in that some people were pissed off enough to burn those with the minority opinion at the stake for disagreeing with dominant dogma.
Boethius and St. Augustine argued against the First O, using the argument that omniscience precluded human free will. The only reason I know this is because I had to write a term paper on the arguments for free will a long long time ago in college far far away. I'm not sure about the history of the Second and Third O, though. And just doing some quick Googling, it looks like there are a bunch of contemporary theologians who are uncomfortable with the omniscience argument. You'd think I was threatening certain people's core beliefs or something...
But if at the time there was "significant debate" about the matter, then that must mean there was a significant fraction of believers/theologians who did hold the now-prevalent view on omniscience! In that case, wouldn't your original claim be more accurately phrased as this view only being dominant/prevalent/universal in modern Christianity, rather than suggesting it was (mostly?) absent in more ancient times?
Good point. Except that I won't go as far as to say it was always absent in ancient times. At some point the rabbis, pre-rabbis, or the priests started wondering about the nature of omniscience. I don't know when. It may have been as late as the Third Century BCE, with the influence of the Greeks and their logical reasoning. I don't think there was necessarily agreement until the pro-omniscient schools of thought became the dominant dogma. It was being debated as late as the Tenth Century CE by rabbinic schools of the law vs mystical schools (Maimonides isn't preaching to the choir, he's preaching to his philosophical opponents). Probably this continued as the Kabbalist schools developed in the late medieval period. Likewise, the Arab philosophers were debating the question (as late as the Eleventh Century). In fact one Sufi philosopher (whose name escapes me right now) actually suggested that our consciousness was part of Allah's larger consciousness — and thus, being part of God, we *were* God. That didn't go over very well with the legalist schools. And as I said above, Thomas Aquinas's discourse on the problem of omniscience was probably directed at scholastics who had differing views of the matter.
No, this is a core tenet of classical theism, so in Christianity it dates back to the early church fathers at minimum: Clement of Alexandria, for one. And it continues through history as an understanding of every mainstream sect, so you get folks as far apart as Augustine, Aquinas, and the Westminster Divines all supporting it.
That said, though you won’t find the formal word ‘omniscience,’ the better argument is that both Testaments are quite clear on this point: the Psalms, for example, insist repeatedly that God ‘knows all things,’ that His ‘understanding is infinite.’ Likewise the Gospels ascribe to God knowledge of even very trivial things, as well as knowledge of things in the heart that men don’t know about themselves. The epistles, too, say and imply quite a bit that God knows everything.
It may be the core tenet of contemporary Catholic theism, but I think you're mistaking the Christianity of today for the only form of Christianity that's ever been.
Be that as it may, all the way back in the Garden, it's clear that Elohim/Yahweh was not all-knowing. Genesis 3:9 - But the Lord God called to the man, “Where are you?” So Elohim didn't know where Adam was hiding. And then in Genesis 3:11 - And he said, “Who told you that you were naked? Have you eaten from the tree that I commanded you not to eat from?” So, we see that the LORD didn't see the dreadful apple-eating event go down.
Judaism before Christianity had no fixation on Yahweh being omniscient and omnipotent. The rabbinical mystics of the Merkavah traditions had the myth of Enoch being carried up to Heaven (or rather down to Heaven, because in the Merkavah school of thought the Earth floats above Heaven). Enoch sat on the winged chariot throne of Yahweh and was transformed into the Angel Metatron, who became Yahweh's Mini-Me. Yahweh couldn't be everywhere at once or see everything going on everywhere, so he delegated to Metatron. More orthodox rabbinic schools discouraged this belief (labeling it heresy), because it smacked of polytheism — but questions about Yahweh's omniscience must have lingered up through the 10th Century, because Maimonides spends a lot of time in his _Guide for the Perplexed_ arguing for God's omniscience. The fact that he's arguing this point implies that there were schools of thought that disagreed with his position.
And there are threads of gnostic dualism interwoven into various Christian sects up through the 14th Century. For instance, the Cathars believed that evil was an actor independent from God, that God had no control over what evil did, and that evil could conceal its actions from God. The Catholic Church spent a lot of time inveighing against dualism (and burning dualists at the stake), but even today many Christian sects still see the Devil as an independent actor (with all sorts of theological gymnastics to explain how the Devil can be an independent actor — i.e. not of God — but still be part of God's plan), which makes God's omniscience seem sort of intermittent.
Much of this early textual evidence you are appealing to describes a tradition that is clearly not monotheistic in the modern sense of having a single creator God that is in charge of all existence, but rather having many deities, each significant to a particular tribe. It's no surprise that *those* traditions would have different views on omniscience than truly monotheistic Judaism and its successor religions.
You are basing an entire theology on the very slim reed of believing that God doesn’t ask rhetorical questions, when it is obvious from Job that in fact He does and it seems to be one of His favorite things.
“Judaism before Christianity” I make no argument about Judaism outside of Christianity, but note that good-faith arguments about religions should probably start with mainstream beliefs and work outward rather than starting with the fringe mystics and their apocryphal writings and then never arrive at orthodox thought.
“Catholic tenet” I assume you’re not ignoring my points, you just don’t have the background in Christianity to catch them, but the Westminster Divines are 100% Protestants. Far from being a Catholic tenet, it’s actually held *more* strongly by the Reformers. And as I alluded to before, RCs are only one inheritor of the early church fathers. You’ve also got to reckon with the Eastern Orthodox, the N. African church, etc., who all espouse divine omniscience. This does not seem like a winning position.
“Many Christian sects still see the devil as an independent actor” This seems like a misunderstanding: even Arminians believe that God has foreknowledge of the choices we make; he just is not the cause of them. If you genuinely believe that Satan is independent from God, you’re outside of the theologically orthodox Christian tradition and a pretty rare bird besides that: y’know, a perfect match for the Cathars. I’m sure you could find some more historical heretics who believe this, but given your original comment that would be moving the goalposts pretty significantly.
Good-faith arguments about religion shouldn't assume all co-religionists of a certain class have uniform views. You're claiming there's an orthodoxy of thought that always existed, where it may have existed only tenuously. It's like atheists using the Bearded Yahweh throwing thunderbolts as a stalking horse for God. I may not be as knowledgeable about the ins and outs of Christian orthodoxy as you, but I think it was Boethius who first posed the question of whether God's omniscience implies that humans have no free will. I may be mangling his argument, but if God knows the past and future the way he knows the present, it is difficult to see how humans have any agency. Certainly, my Puritan ancestors took it as a given that we had no agency, and that nothing we could do would change whether God had decided we were damned or saved.
You've forced me to dust off my old Aquinas. Holy Yahweh, he can be a tedious read! But if I recall my history, Aquinas got into trouble for some of the conclusions he made about God's omniscience in Question 14 — he made God's omniscience *too* inflexible. It took a few centuries for the Church to come around to his views — just in time for the Reformation.
Anyway, I don't find your arguments convincing because (a) they are based mostly on how the Church (however you want to define it) sees itself today, and because (b) contemporary Christianity has a selective memory of past events, ignoring all the unpleasant arguments that went down between medieval scholastics about God's omniscience. Likewise they brush under the theological carpet all the alternative Christian belief systems that were out there but were branded heretical by the establishment theocrats of the time (and therefore we should think no more about them, perish the thought!).
This assumes the ancient Israelites needed to claim that the words of Yahweh/Elohim were a rhetorical device to make the Eden myth consistent with their beliefs. I would argue that as a band of bronze-age goat herders the ancient Israelites hadn't gotten around to considering the question of Yahweh's omniscience, yet.
My point is that religions evolve. The Judaism practiced today is different from the way that religion was practiced before the Diaspora, and the religion of the Jews under Roman rule was different from the way the ancient Israelites practiced their religion.
I had to google St. Francis of Assisi - which then took me to Loyola/Jesuit theological constructs. "recognize, like Jesuit Gerard Manley Hopkins, that “the world is charged with the grandeur of God”—the positive, energetic and engaged vision of God's constant interaction with creation" - of course finding one's theology in poetry only is a limitation. And there would be a distinction between "God is the world" versus "God in the world/spirit in the world." Being charged with the grandeur of it might not be identical to being it. But if God is always and only separate from the world, all sorts of irritating sequelae appear; where exactly is the boundary? Measured in what units? There seems to me to be a distinction between "spirit," i.e. whatever is pouring, and an omniscient being/higher intelligence - but where would that line be? It usually devolves into paradox, which to me means that human reason is having trouble with it and there's some other phenomenon going on.
Approaching it can be interesting though. If God doesn't span time, when did God pop in and decide to stay? If God doesn't span space, where can we go that is outside God? Are there higher-order beings/intelligences which interact with us but which are not "God" per se?
I need to know more about Christian theology than I do. Intermittently I have suspected that one of the major motivations for a priesthood, historically, was that political leaders realized that anyone with the patience and interest in debating philosophy would engineer a doomsday machine eventually if left to their own devices. So part of the point of religious practice was to keep the brains busy and out of the way (not the opiate of the masses per se, the opiate of the earlier-stage technical class). That being said, I think spirituality is an important element of life, and that perhaps some of the brains were kept a little too busy and lost touch with what they were trying to describe.
Sometimes I feel uncomfortable about borrowing parts of Christianity and meshing it with quasi-animist handwaving. Then I say, no, that's a very Christian experience, contemplating and discovering again and again what a "one God" construct would be. There's a "history + philosophy" aspect of doctrine in which some of it seems to be due to history or political choices, and some of it seems to be observations on the nature of reality. I don't know why the early Christians jettisoned reincarnation, for example, or women as religious leaders.
But having God relegated to an upstairs room, coming down occasionally to yell at the family or demand dinner, makes possible all kinds of great stories about human nature.
The obvious retort to this is some statement of compatibilism. It certainly doesn't seem obvious to me that free will can't be compatible with one's choices being deterministically knowable. For example, if I know my friend is thirsty and I offer him water, I know he'll accept, but it doesn't feel like his decision is unfree in any meaningful sense.
I suppose ultimately this is basically a semantic discussion, but I find it hard not to side with the compatibilist view when I consider that, in a Universe fully described by (deterministic) General Relativity plus the actions of a "pseudo-free" agent who always makes truly random choices, everything is still completely knowable to a God who can observe the entirety of the four-dimensional Universe from "outside".
This week there was a cool new gene therapy paper that used human proteins (instead of viruses) to package mRNA. So far it's only been tested in cell culture but in my opinion it's very promising. I wrote about it here: https://denovo.substack.com/p/mrna-delivery-gone-non-viral
I've also continued my human herpesvirus series; this week's post is about varicella-zoster virus (which causes chickenpox and shingles). It's the only human herpesvirus for which effective vaccines exist. https://denovo.substack.com/p/varicella-zoster-virus-a-rare-success
Do we have any of von Neumann's remains left for sequencing? (I imagine the Manhattan Project members had regular blood work done; maybe some got stored?)
Any guesses of what would happen if an egg and a sperm both got rewritten with the relevant SNPs and other interesting alleles?
Neumann was buried normally in accordance with his Roman Catholicism (cremation still being looked on askance). I've never read of any known biological samples surviving the way they did for Einstein. (For all his nuclear-related work, he was not much involved close-up - he was a mathematician, working on computing - so he would be a somewhat improbable candidate for sampling, although people always suggest that the cancer might've been related.) As far as partial overwriting goes, you'd get a lot of regression to the mean under the additive model, and under the emergenesis model, you'd be utterly disappointed. (Remember: he had a daughter and two grandchildren. So 50% and 25% relatedness. His daughter is accomplished but you probably haven't heard of her: https://en.wikipedia.org/wiki/Marina_von_Neumann_Whitman The grandchildren don't even rate WP entries.) That's why people shoot the breeze about full-blown cloning: minimize regression, and preserve any interactions or emergenesis-like nonlinearities.
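To put rough numbers on "a lot of regression to the mean under the additive model" (the breeder's-equation logic is standard; every input below is a hypothetical guess for illustration, not an estimate of anyone's actual scores):

    # Rough sketch of regression to the mean under a purely additive model:
    # expected offspring deviation ~= heritability x midparent deviation
    # (in SD units). All inputs are hypothetical guesses.
    H2 = 0.7       # assumed narrow-sense heritability of the trait
    JVN = 4.0      # suppose the original sits at +4 SD (a guess)
    SPOUSE = 1.0   # accomplished but ordinary-range spouses, +1 SD (a guess)

    midparent = (JVN + SPOUSE) / 2
    child = H2 * midparent                    # expected deviation of a child
    grandchild = H2 * (child + SPOUSE) / 2    # child also marries at +1 SD (assumed)

    print(f"expected child:      +{child:.2f} SD")      # ~ +1.75 SD
    print(f"expected grandchild: +{grandchild:.2f} SD")  # ~ +0.96 SD
    # Impressive people, but nothing like the original - which is why partial
    # SNP editing of an egg/sperm would disappoint, and why the comment above
    # talks about full cloning to avoid the regression entirely.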
The ancient DNA people (eg the Reich lab) go for the inner ear bones, of all things. Turns out to be the best place for preserving DNA intact even over hundreds of thousands of years. ("[The petrous bone] is one of the densest bones in the body." - who knew?!)
Being a dental school professor and an assistant professor is pretty good, but if you had invested hundreds of millions of dollars and multiple felonies into raiding von Neumann's grave to genetically engineer & part-clone one of the greatest mathematicians in history, expecting breakthroughs like his, then I think you would be extremely disappointed. Going from von Neumann to... them seems like plenty of regression to the mean to me.
Note the assortative mating and family environment means that the regression there may not be as much as you'd predict.
Cloning Bentham is probably going to be harder because descriptions of his 'auto-icon' sound like there's not that much left and the mummification/preservation process will be much harsher on DNA over 189 years than von Neumann's buried body over 64 years. I don't think there would be much point, though, compared to von Neumann: Bentham had an impact more because of his ideas and his personality entertaining ideas others simply refused to entertain, but despite a bit of child prodigality, he never struck me as important due to raw abilities of the sort you might expect to be highly heritable. He was a bullet biter, but we have plenty of them around today.
Sequencing dust: If you couldn't use the body, then yeah, it's hypothetically possible to sequence ambient traces instead ( https://www.gwern.net/Embryo-selection#glue-robbers-sequencing-nobelists-using-collectible-letters ). Humans shed astonishing amounts of DNA, and DNA sequencing has gotten astonishingly sensitive. It is not at all out of the question to recover DNA from things like licked stamps, and you could likely find some correspondence von Neumann licked himself, and there you go. I was amused by this idea so I did a bit of looking a while ago, and collecting letters/stamps is surprisingly affordable, and there's no 'copyright' or 'patent' which their descendants might inherit or 'privacy right', because the cells/DNA are considered property which they disposed of and you have purchased in full. If you have a stamp that, say, Albert Einstein licked (would cost only a few thousand dollars), there's no reason at all you couldn't just go sequence it with forensic-level genome sequencing and just post it online... So I proposed that you could invest something like $100k into buying up memorabilia from the likes of von Neumann as a de facto, implicit DNA biobank of extraordinary individuals - none of this appears to be priced into current collectibles. :)
I don’t really want to hyperstition this into happening, but would one rather have 100 JvNs or 1 JvN plus 99 other extremely intelligent historical mathematicians/scientists/philosophers etc of your choice? And could one offer JvN or the mixed philosopher sperm/eggs to everyday people who want to have kids with them - more assistant professors can’t hurt
I would go with the latter. There is presumably some level of diminishing returns from a lot of JvNs running around; some diversity of phenotype is likely to help with those diminishing returns, since different people can unblock each other's bottlenecks; you have no guarantee of replicating the X-factor, or even that the X-factor is genetic to begin with, so you'd be putting a lot of eggs into one basket; and the latter would have much more research value/VoI. I'm still not too sure what to think of the lack of eminent twin pairs: twins come with a bit of a biological penalty, so there's a tail effect which I'm not sure is important or not, and there may also be another log-normal/pipeline/emergenesis-like illusion going on, with identical twins coordinating very effectively but simply not sharing identical rage for mastery or idiosyncratic 'special interests' with sufficiently high probability to show up (although this would not be an issue with enough clones, as with enough of them, eventually special interests would collide - birthday paradox).
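The birthday-paradox aside, made concrete (the number of possible "special interests" is a made-up parameter; the only point is how quickly collisions become likely):

    # Birthday-paradox sketch. d = number of possible idiosyncratic "special
    # interests" is a made-up parameter, chosen only for illustration.
    import math

    def p_shared_interest(n_clones, d_interests):
        # P(at least two clones pick the same interest), assuming each picks
        # one uniformly at random - a crude assumption.
        p_all_distinct = math.prod((d_interests - i) / d_interests for i in range(n_clones))
        return 1.0 - p_all_distinct

    for n in (5, 10, 20, 40):
        print(n, round(p_shared_interest(n, d_interests=200), 2))
    # With 200 possible interests, 20 clones already give a ~60% chance of a
    # collision, and 40 make it near-certain.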
Given that DNA does degrade with time, would this not be a genuinely worthwhile investment toward the day when cloning tech is advanced enough to make use of the genomes? 6 figures seems extremely cheap given the potential upside.
Yeah, that was kind of my thinking. Cloning, sequencing to run GWASes on or to upload to geneologies, proteomics, studying their health from the residues, plenty of uses, institutions holding existing letters sure as heck won't let you do it, the collectibles are ridiculously cheap and can be stored in a tiny space for cheap, $100k barely buys you anything in other fields, why not?
But you have to admit, it's a super-weird thing to do, it wouldn't pay off for decades, I don't happen to be a billionaire who could casually punt $100k on something so speculative, and there's a good counter-argument that given the timescale and how many collectibles are floating around, there's no point in doing it *now* - even when it becomes totally normalized to sequence some famous person from a stamp or hairbrush, there'll probably be enough floating around (especially if you aren't too picky about which people) that you can buy whatever you need for cheap then too.
I don't really think we have a lot of regression to the mean going on here....unless Neumann was so phenomenally smart that despite two generations of regression to the mean, his grandchildren are Harvard and Yale professors
There was one physicist who said something to the effect of "von Neumann was a demigod, but he could imitate humans almost perfectly." Can't find the quote now or who said it. But it stuck in my head.
There are a hundred thousand top professors at top colleges if you’re willing to spread out a bit. There’s only one von Neumann, and maybe (?) hundreds of people who demonstrated his caliber. Doesn’t quite compare.
Heavens, no. Harvard has less than 1000 tenured faculty, and less than 300 in the schools of science and engineering. If you gathered all the science and engineering faculty in the top 20 universities, I think you'd have about 5,000-8,000 people total.
Professor in general, not science professor - should’ve specified! I picked the larger reference class for a better comparison. His daughter, Marina von Neumann Whitman, is a professor of business administration and public policy at the University of Michigan.
Mind you, if you restrict yourself to peer institutions of Harvard and Yale, that might be about 5-10 universities, depending on the field, and so about 1500-2500 people.
To the last: yes, he was that phenomenally smart. John von Neumann is a very good candidate for the smartest person who ever lived. From Wikipedia: "Nobel Laureate Hans Bethe said 'I have sometimes wondered whether a brain like von Neumann's does not indicate a species superior to that of man'". Read the rest of the "Cognitive Abilities" section of his Wikipedia page—it's quite something.
I have recently concluded, on the basis of a bunch of reading, that animal foods are basically bad for us. Or at least bad for people like me who are at high risk of a lot of chronic diseases. I'm sort of confused by how I could have followed nutritional news pretty closely for years and years without having found this out before. Or maybe I'm confused now. Anyway, that's what's on my mind this week.
Hm, I've come to the opposite conclusion. I ate very little meat the first 30 years of my life and was constantly tired and had a lot of digestive problems. Switched to a lot more meat and my health and weight and energy are vastly improved.
I think people just like the "idea" that animal products are bad because...well, it's gross and mean and requires killing animals. Much nicer to think of just eating things that are alive because of photosynthesis than eating flesh or dairy. But in reality, plant matter is much, MUCH more difficult to digest, 95% of it is actually toxic to humans, it gives you gas, and it's very difficult to get all vitamins and minerals from plants. Meat (1) is way easier to digest, (2) is never toxic when unspoiled, and (3) has all the vitamins and minerals you need.
So I'm not buying it. The more you feed dogs meat and not fruits and vegetables, the less gas and puking and digestive issues they have. We're omnivores just like them, and don't have the long digestive systems and multiple stomachs that ruminant vegetarian species have. We have canines and the front-facing eyes of predatory species. We are clearly designed to eat meat.
The problem is, there are really too many humans for most of us to eat a meat-heavy diet. And factory farming is horrible and cruel for the animals. So in my mind, those are the problems, but meat and animal products themselves are extremely good for you.
Plant matter is NOT “95%” toxic. Hunter-gatherer diets very consistently contain lots of varying plant matter - fruits, roots, seeds, herbs - and have for almost all of evolutionary history.
> The more you feed dogs meat and not fruits and vegetables
Yeah because they’re dogs. They eat lots of animals. But even wild wolves eat a decent amount of plants.
> In addition, plant matter is prevalent in wolves' summer diet, with 392 (74%) of 530 scats analyzed containing some type of plant material, largely grass (Graminae). This is consistent with summer observations of wolves consuming grass and other plant material.
Most animals will eat plants or animals if they’re available. Even deer will occasionally grab insects or small mammals.
> is never toxic when unspoiled
Also wrong. Trivial examples: polar bear liver, poisonous fish, some shark meat when pregnant, transmissible CJD and BSE, *parasites*, etc...
Yeah, most plants ARE toxic to humans. Try eating the vast majority of leaves, plants, trees, grasses... you cannot. You will get sick and will not be able to digest it, unlike a ruminant animal that has the digestive system to handle it. The share of plants that humans can actually eat, out of the whole range of plants, is extremely small... estimates range from 5% to 45% (and the high estimate only works with cooking to pre-digest it). In contrast, we can eat virtually all meat/animals. You have listed a tiny handful of examples out of the thousands of options, but we can eat pretty much 99.9% of animals, including even poisonous snakes like rattlers, etc. Also, I never said dogs and wolves don't eat plants (they are omnivores like us), nor did I say that humans should not eat plants. Merely that the idea that plants are healthier than meat is wrong-headed, with no good evidence for it and plenty of evidence for the opposite conclusion. There are humans that almost solely eat animal products and humans that almost solely eat plants; we are variable. The primary issue with meat is not that it is bad for you but that it is bad for the animals and the global ecosystem... apex predators like humans running 7+ billion deep are the problem, if they're eating meat.
Oh, I see now. Non-nutritious isn’t “toxic”. You can eat as many leaves as you want, but you won’t die - maybe feel a bit bad and have gas, but that’s not toxic. Plenty of herbivores aren’t generalists either, especially smaller ones and insects. And to say we can eat all animals is ... can you eat bones? Teeth? Hair? Those are “toxic” parts of animals.
Plants are not healthier than meat: yes
Meat is healthier than plants: not coherent, it’s like saying water is healthier than vitamins.
Meat would probably be a lot better for “ecosystems” if grown differently, just like plants. And there is a lot of marginal land much more fit for grazing than intensive cultivation.
Toxic doesn't mean it kills you, it means it's bad for you. A toxin. The point is, evolution designs organisms to not want to be eaten. Animals can hide and run away to not be eaten. Plants cannot, and they have therefore evolved all kinds of strategies to not be eaten, such as being toxic/poisonous, having thorns, etc. (of course, in the case of fruit, the plant DOES want that part to be eaten so the seeds can be distributed in scat, but it doesn't want the rest eaten, so the fruit is delicious and tempting while the rest of the plant is not).
I think in a head-to-head match-up, meat IS better for you than plants. Stop talking about teeth, feathers, and bones, we're talking about meat and dairy. Meat is far easier to digest than plants (doesn't cause gas, bloating, vomiting, and isn't so undigestible it just passes right through like much plant matter). It has all vitamins and minerals and on a per-calorie basis provides far more nutrients than plants. There is zero evidence that it is even possible to become obese on a meat-only diet and pure carnivores aren't fat. If you want to fatten up an animal, you give them corn and wheat, not meat. Notably, when Americans started getting fat in the 80s, that's exactly when their consumption of animal products went DOWN, as today, only about a third of calories come from meat and dairy, while in the past it was more like 50% or more.
The problem with studies comparing a vegan or vegetarian diet with a meat-eating diet is that they are not doing a true comparison, as they are comparing it with people who eat meat as maybe 30-40% of their diet, and everything else they eat is plant-based... french fries fried in vegetable oil, breads, etc. To make a true comparison of which is healthier, you would compare a vegan diet with a diet that solely consists of animal products... for example the Maasai, who have a diet that is almost 100% milk, meat, and blood. If you're going to go with a direct comparison, I'd place my vote on the animal products diet being healthier. Though it is ideal to eat a range of things. My ideal diet is about 70% meat and dairy and 30% fruits, vegetables, and grains.
Minor point, but I don't think you're right that meat is easier to digest than (edible) plants. In fact, it's probably generally significantly harder, which is why we need this complex two-part digestive tract, with radically different pH in each. Proteins need to be pried apart before they can be digested, which requires the low pH and acid of the stomach, and only then can they be chopped up, which happens partly in the stomach and partly in the small intestine.
By contrast, carbohydrates (with the exception of the cellulose to which you refer further up) and fats are simple to chop into bits, chemically speaking, and readily burned or reconfigured for storage.
Also, amino acids are a pain to store, so the body apparently doesn't, and they're dangerous to recycle and burn, because the amino group cannot easily be oxidized (at least not in higher animals; some bacteria can) and when it is pried off of the amino acid it readily forms ammonia, which is exceedingly toxic -- requiring some fancy biochemistry in all animals higher than fish to get rid of the stuff safely. In humans we're obliged to throw away 1 perfectly good carbon atom for every 2 N atoms we don't want, which is not bad but obviously inefficient.
You're certainly right it's more nutritious of course. The most nutritious food for humans would be other humans, ground up and properly sterilized.
I think most of the evidence of poor health outcomes is related to 'red meat', not animal foods more generically. And that evidence is typically about "high levels" of red meat consumption. A search for "red meat" on pubmed.gov produces a large number of studies; none are randomized trials, and a lot are junk, but the gist is that high levels of red meat consumption are linked with different digestive cancers, heart disease, and type II diabetes, among others.
Anecdotally, I find red meat hard to digest, and have avoided it for decades; more recently, I contracted the alpha-gal allergy and break out in hives if I eat any mammal that is not a primate. So I don't really have to think about this anymore.
I would emphatically disagree with red meat being unhealthy. There’s the HG comparison - bison hunters were fine. Red meat isn’t correlated with fat either - bison meat is leaner than chicken (and is delicious).
> > Red meat intake was not associated with CHD (n=4 studies; relative risk per 100-g serving per day=1.00; 95% confidence interval, 0.81 to 1.23; P for heterogeneity=0.36) or diabetes mellitus (n=5; relative risk=1.16; 95% confidence interval, 0.92 to 1.46; P=0.25). Conversely, processed meat intake was associated with 42% higher risk of CHD (n=5; relative risk per 50-g serving per day=1.42; 95% confidence interval, 1.07 to 1.89; P=0.04) and 19% higher risk of diabetes mellitus (n=7; relative risk=1.19; 95% confidence interval, 1.11 to 1.27; P<0.001).
From the LW post study below, but that’s not super reliable either.
I'm not sure that the HG comparison is meaningful, most of the effects of red meat seem to be on chronic diseases that occur later in life - beyond the typical lifespan of an HG - also, it probably doesn't matter what you eat if you are running many miles per day to catch it. And of course, just because HGs did it, doesn't mean it was good for them.
Most current evidence is not regarding bison but regarding farmed beef and pork. But here are a few positive meta-analyses:
"Thirteen published articles were included (ntotal = 1,427,989; ncases = 32,630). Higher consumption of unprocessed red meat was associated with a 9% (relative risk (RR) per 50 g/day higher intake, 1.09; 95% confidence intervals (CI), 1.06 to 1.12; nstudies = 12) and processed meat intake with an 18% higher risk of IHD (1.18; 95% CI, 1.12 to 1.25; nstudies = 10)."
"Positive associations were observed for red (RR per 100 g/d, 1.10; 95% CI, 1.03-1.18) and processed meat (RR per 50 g/d, 1.18; 95% CI, 1.04-1.33). None of the other food groups were significantly associated with breast cancer risk."
"The purpose of this umbrella review was to evaluate the quality of evidence, validity and biases of the associations between red and processed meat consumption and multiple cancer outcomes according to existing systematic reviews and meta-analyses. The umbrella review identified 72 meta-analyses with 20 unique outcomes for red meat and 19 unique outcomes for processed meat. Red meat consumption was associated with increased risk of overall cancer mortality, non-Hodgkin lymphoma (NHL), bladder, breast, colorectal, endometrial, esophageal, gastric, lung and nasopharyngeal cancer. Processed meat consumption might increase the risk of overall cancer mortality, NHL, bladder, breast, colorectal, esophageal, gastric, nasopharyngeal, oral cavity and oropharynx and prostate cancer. Dose-response analyses revealed that 100 g/d increment of red meat and 50 g/d increment of processed meat consumption were associated with 11%-51% and 8%-72% higher risk of multiple cancer outcomes, respectively, and seemed to be not correlated with any benefit."
"This systematic review was conducted according to the methods recommended by the Cochrane Collaboration and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Relevant papers published through March 2020 were identified by searching the electronic databases MEDLINE, Embase and Scopus. All analyses were conducted using ProMeta3 software. A critical appraisal was conducted. Finally, 17 studies met the inclusion criteria. The overall effect size (ES) of depression for red and processed meat intake was 1.08 [(95% CI = 1.04; 1.12), p-value < 0.001], based on 241,738 participants. The results from our meta-analysis showed a significant association between red and processed meat intake and risk of depression. "
"On average, participants who reported consuming meat regularly (three or more times per week) had more adverse health behaviours and characteristics than participants who consumed meat less regularly, and most of the positive associations observed for meat consumption and health risks were substantially attenuated after adjustment for body mass index (BMI).... Higher consumption of unprocessed red and processed meat combined was associated with higher risks of ischaemic heart disease (hazard ratio (HRs) per 70 g/day higher intake 1.15, 95% confidence intervals (CIs) 1.07-1.23), pneumonia (1.31, 1.18-1.44), diverticular disease (1.19, 1.11-1.28), colon polyps (1.10, 1.06-1.15), and diabetes (1.30, 1.20-1.42); results were similar for unprocessed red meat and processed meat intakes separately."
All of these are observational studies (or meta-analyses of observational studies), so come with the corresponding caveats. But there is a notable lack of evidence pointing the other direction, so I would not dismiss it out of hand, much less "emphatically".
No. HGs regularly live into their sixties and seventies, with a small few making it to 80. That's plenty of time for these conditions to show up. Average lifespan is heavily weighted by disease and early mortality, and it doesn't reflect how long the adults who survive actually live.
Large-scale diet observational studies are bunk IMO. And in terms of observational evidence the other way, there ... is some in the post below. Also, shouldn't it be kinda sus that red meat causes all bad conditions at the same time, according to the correlations? That's possible, but maybe it's a broader confound? A bigger point is that nomadic herders also don't have these diseases despite eating herded meat.
I'm not going to agree that all large scale observational studies of diet are bunk; but I think they all fail in isolating the effects of diet from other lifestyle choices. Nomadic herders and HGs are much more physically active than the typical Westerner, which probably trumps all other lifestyle factors, including diet. The observational studies try to 'adjust' for things like exercise etc but this is largely imperfect analytically and rarely reflects the type and amount of activity. It is notable that red meat, esp processed red meat, has very little evidence, observational or otherwise, of health benefits.
I’m not sure how significant exercise is relative to diet. Both important obviously. My guess is red meat, especially if it’s from a heritage variety that eats grass or something, is fine, and that since nobody eats it like that along with an otherwise healthy diet the population studies don’t notice. Not sure precisely how it plays out though.
Recent studies seem to agree that it helps. However, I’m still suspicious. Also,
> An important enzyme in this [coQ synthesis] pathway is HMG-CoA reductase, usually a target for intervention in cardiovascular complications. The "statin" family of cholesterol-reducing medications inhibits HMG-CoA reductase. One possible side effect of statins is decreased production of CoQ10, which may be connected to the development of myopathy and rhabdomyolysis. However, the role statins play in CoQ deficiency is controversial. Although these drugs reduce blood levels of CoQ, studies on the effects on muscle levels of CoQ are yet to come. CoQ supplementation also does not reduce side effects of statin medications
Wat? What if that’s the pathway for benefit? I seriously doubt it. Still weird in other ways...
> Flavonoids/anthocyanins
Don't supplement this; just eat wild / heirloom variety fruits - they have lots of pigment
He’s right about the cholesterol story being funky
> Red meat intake was not associated with CHD (n=4 studies; relative risk per 100-g serving per day=1.00; 95% confidence interval, 0.81 to 1.23; P for heterogeneity=0.36) or diabetes mellitus (n=5; relative risk=1.16; 95% confidence interval, 0.92 to 1.46; P=0.25). Conversely, processed meat intake was associated with 42% higher risk of CHD (n=5; relative risk per 50-g serving per day=1.42; 95% confidence interval, 1.07 to 1.89; P=0.04) and 19% higher risk of diabetes mellitus (n=7; relative risk=1.19; 95% confidence interval, 1.11 to 1.27; P<0.001).
I'm somewhat concerned about what "processed meat" actually is - I certainly don't eat any, to the point where I'm slightly nervous about the butcher grinding meat for me ... but I really have no clue what the term covers. Sliced turkey? Ham? Pepperoni? Salami? If curing is processing, is smoked salmon bad? Canned tuna? And hunter-gatherers certainly fermented meat and ... probably salt-preserved some of their meat? Celery powder? Drying? And is fried chicken processed meat or not?
Also this is one of a hundred meta analyses. Do the others agree?
The medical errors study he references is wrong, but his advice there is very much right: lots of doctors are incompetent. Try to get a competent doctor and double-check your symptoms and treatments with doctor friends (you're here, you're smart, you have smart friends who can evaluate how competent they are). Not doing this will probably kill you. Doing so has saved several of my friends from likely death. Check everything!
I had to go two links deep to find the definition:
> “processed meat” was defined as any meat preserved by smoking, curing, or salting or addition of chemical preservatives, such as bacon, salami, sausages, hot dogs, or processed deli or luncheon meats, and excluding fish or eggs [24];
Yeah, it's so frustrating that people talk about "processed" like they know what it means. To me, cooking something or blending it sounds like "processing" but without any obvious reason this would be bad. And likewise people talk about "preservatives" being bad but why would that be *generally* true of *all* preservatives?
As far as I can tell, all that the above sorts of processed meat have in common is ... salt, nitr[ai]tes, and, for a minority of them, drying (bacon, sausages, and hot dogs aren't dried). And maybe other stuff? That seems like a narrow set of causes for what seems to be a significant portion of America's health problems. Which seems somewhat hard to believe, but idk. Certainly salt preservation isn't new; idk about nitrates.
The Middle Ages made pâté a masterpiece: that which is, in the 21st century, merely spiced minced meat (or fish), baked in a terrine and eaten cold, was at that time composed of a dough envelope stuffed with varied meats and superbly decorated for ceremonial feasts. The first French recipe, written in verse by Gace de La Bigne, mentions in the same pâté three great partridges, six fat quail, and a dozen larks. Le Ménagier de Paris mentions pâtés of fish, game, young rabbit, fresh venison, beef, pigeons, mutton, veal, and pork, and even pâtés of lark, turtledove, cow, baby bird, goose, and hen. Bartolomeo Sacchi, called Platine, prefect of the Vatican Library, gives the recipe for a pâté of wild beasts: the flesh, after being boiled with salt and vinegar, was larded and placed inside an envelope of spiced fat, with a mélange of pepper, cinnamon and pounded lard; one studded the fat with cloves until it was entirely covered, then placed it inside a pâte.
In the 16th century, the most fashionable pâtés were of woodcock, au bec doré, chapon, beef tongue, cow feet, sheep feet, chicken, veal, and venison.[22] In the same era, Pierre Belon notes that the inhabitants of Crete and Chios lightly salted then oven-dried entire hares, sheep, and roe deer cut into pieces, and that in Turkey, cattle and sheep, cut and minced rouelles, salted then dried, were eaten on voyages with onions and no other preparation.[23]
I honestly would have no idea how to stop eating processed meats if I had a normal diet. Apparently hot dogs aren't just sausages? Is sliced turkey "processed"? How does bacon have enough preservative in it to be comparable to salami? And why is smoked salmon, presumably preserved with the same chemicals, excluded - does one have to not eat that?
Based on the examples, I think poultry breasts and thighs, ground meat, and steaks all qualify as "unprocessed."
It's why I went to dig into the definition. If I got the definition wrong, well, then it's horrible advice (having to dig through two separate links to get a definition that turns out to be wrong).
I’m still not sure about ... sliced turkey breast in a plastic bag. I think it is processed, as it’s meat preserved by ... celery powder, which has nitrates as the preservation mechanism.
There were some studies of Adventists that found vegetarians had better health outcomes. Perhaps this is where we get the idea that animal products are bad. But I am not aware of any studies comparing healthy meat-eaters with healthy veg or vegans. Vegetarians seem to be a health-conscious group, so it is absolutely unfair to compare them with everybody else. Too many confounds.
I seriously doubt this is true. Reason being: throughout our evolutionary history we have been eating animal foods as a part of our diet. All groups of hunter-gatherers, as well as the agricultural cultures studied, have eaten some meat, and for most it's a significant energy and diet contributor (or fish or other animal products, but mostly meat). [3] Hunter-gatherer populations have individuals who regularly live to 60 and 70 and somewhat rarely 80, [1] and despite their meat eating they don't show signs of chronic diseases like we have in modern life. [2]
What about the difference between the meat we eat, and the meat hunter gatherers eat? Are there any studies on the effects of eating only factory farmed meat vs. eating wild animal meat and/or meat from small-scale-produced domestic animals?
In terms of just fat content, yes, HGs that eat fatty meats, such as marine mammal meat, do not show such chronic diseases, I think, as posted about last thread. I dunno beyond that.
I'm also pretty sure beef and pork and other mammals are totally fine. Existing studies tend to pack them in with every other aspect of the modern diet, and plenty of what hunter-gatherers ate was other large mammals. I'd still recommend eating high-quality meat though, raised on grass if it's a grazer or a mix of farm food if it's something like a pig, as opposed to commercially farmed ones.
This reminds me of the xkcd comic about "if it were real, it'd be monetized by now." If veg*nism were better in any real capacity, then we would expect to see institutions and teams win using it as a competitive advantage. As far as I'm aware, this has never been true.
Historically, veg*nism is mostly limited to religious ascetic traditions as a form of self-denial or self-sacrifice - it has never been about "we can live longer, be healthier, be stronger," etc until very recently. There are no elite military units that avoided animal products.
"But gladiators were vegetarian!" Sure - they were slaves, and were fed cheap food that would fatten them up to make fighting more interesting. Not really what you'd pick as a counterexample here.
I don't know how expansively you've read in nutrition space, but I'd be ***super*** cautious about drawing any conclusion as firm and broad as "animal foods are bad for us". Everyone from the vegans to the carnivores can make a plausible, evidence-based argument for their position.
In general, the extremist position often goes too far. As far as I can tell, limiting intake of certain things down to a certain point (or increasing up to a certain point) is good, but limiting or increasing beyond those points doesn't have additional benefit.
It seems like everyone can find examples of “I switched to X and saw improvements in weight/cholesterol/etc.” I’d bet that the biggest confounder is being aware of what you’re eating. Most people don’t pay attention to what they’re putting in their bodies, so it makes sense to me that most diets offer a comparative advantage simply by forcing consumers to be conscious of their dietary decisions. Not that this is the only thing that matters, but I’d argue it’s 90% (or more) of the effect.
At this point, I believe there are quite a lot of people (certainly not a majority) who have tried one deliberate approach to eating after another. They're not just switching out from eating a lot of junk food and fast food. They've done keto, vegetarian, vegan, soylent, etc.
There's also the big selection bias in that these are a) people who wanted/needed to switch (vs people who already had something they were content with) and b) people who found the new choice substantially better (vs those who switched back or kept shopping). The other realities don't make as great endorsements.
Yeah. Also, dietary restrictions mean you probably can't eat the combo burger or the smoothie or whatever because it probably had a restricted element in it, so you'll eat less of that. Many, many other confounders.
What's your ethnicity? Cattle-herding societies, such as northern Europeans, East Africans, and South Asians, have developed the gene to digest lactose (last I heard, independently in the case of East Africans vs Indo-Europeans). East Asians and SE Asians don't have that gene, and have trouble digesting dairy products. Likewise, Meso-American populations developed the gene that codes for the enzyme that can digest bean proteins. Thus they don't fart like many people of European descent do after eating beans, and they can digest the bean proteins more efficiently than people of European descent. Likewise, northern Europeans seem to do better on diets with plenty of red meat.
FYI, if you're depending on nutritional studies to guide you, it's worth noting that most nutritional studies haven't proven amenable to being reproduced...
How interesting. I've come to the opposite conclusion, albeit using a far more anecdotal approach. I'm about three years into a ~ 90% carnivore diet of red meat, organs, eggs, cheese, butter, and tallow. Before this, I dabbled with vegetarianism, veganism, and run of the mill "Whole Foods paleo" style eating. At 34, I feel better than I ever have without much of a change in lifestyle besides the alternating diets; I've always been exceptionally active and sporty.
The other 10% of my diet is mostly fruits and the occasional breakfast cereal binge.
I started eating like this a few years ago after seeing how it positively affected two close friends of mine; they went from soft, slow, and tired, to impossibly strong, healthy, and active. They made a lot of changes though, including adding an exercise regimen to their lives.
Just thinking out loud, it seems odd to me to consider that over the course of eons we wouldn't have adapted to eating animal foods, and that evolution would have somehow plagued us with poor health as a consequence for eating what has (always?) been a significant part of our diet.
If you remove animal foods, what are you replacing them with? Most of the food in the grocery store is evolutionarily novel, which seems like it should be largely less than ideal from a health perspective.
I eat whatever the fuck I want and as much as I want (Little Caesars and Taco Bell are staples) and never feel tired. But I get ~1-2 hours of intense exercise every day. And I feel much better now than this time last year, when I was eating a no-sugar, home-made healthy diet of whole foods but not exercising nearly as much (broke up with girlfriend). My point is that adding an exercise routine is such a confound here, it obliterates the significance of diet in my view.
Doesn't that get extremely expensive? If you don't mind me asking, what is a typical week's groceries for you? I eat pretty clean with generally tofu, chicken, potatoes, milk, frozen green veggies, fresh fruit making up my diet, and my impression is that pushing into more protein and more direct animal protein gets very pricey and hard to store with once a week shopping very quickly
It does get expensive, but I consider it as an investment that will save me a lot of heartache and money down the road. I've done a cow share before with friends, which significantly reduced the cost per unit, and I don't often buy the more expensive cuts of meat. I buy frozen berries more often than fresh. I am under no illusion that such a diet is accessible to most people and I feel very fortunate in my position.
I thought this had been settled long ago. I suspect posters here tend to be older because the young among us are going enthusiastically to vegetarianism. I am far from young myself. Veg for almost fifty of my 79 years, but I am the picture of health. A yoga practice for all that time helped, I am sure. As for the young, my six kids, ranging from 55 to 35 years of age are also lifelong veg and equally as healthful. We take no fancy supplements. A varied diet is all that is needed. There are many reasons to embrace a vegetarian and (and especially vegan) diet, way too numerous to go into here. They fall into the categories of health, ethical, environmental, spiritual, financial and moral.
Several ongoing studies of Seventh-day Adventists, beginning in the 20th century, should convince anyone. (One large cohort eats meat. The other does not. All are of the same racial background.) Conclusion: "Vegetarian diets in AHS-2 are associated with lower BMI values, lower prevalence of hypertension, lower prevalence of the metabolic syndrome, lower prevalence and incidence of diabetes mellitus, and lower all-cause mortality."
You're also overlooking the fact that your average vegetarian is far more likely to engage in other healthy habits than your average non vegetarian, such as exercise, not smoking, and having a higher SES.
I have a hard time even making sense of the idea that a vegetarian, and especially a vegan, diet could possibly be a reliably healthy way to eat long term. Not a single indigenous society in history has voluntarily adopted a vegan lifestyle. The remaining indigenous societies we have today universally prize animal foods above all else. Why would evolution select for vegetarianism throughout a long history of eating meat whenever possible?
The argument is that it's similar to the situation for sugar. Why does evolution select for little kids to loooove sugar so much that, if left to their own devices, they would subsist entirely (for as long as they lived) on candy and ice cream? The conventional answer is that in the natural environment, plain sugar is exceedingly rare, but also a valuable source of concentrated calories, so there is no harm and some benefit in making Australopithecines love it enough to go to the trouble of carrying a handful of apples with them when they stumble across an apple tree once in a blue moon. They can't possibly get too much, and prodding them to get a little more (especially the kids) is adaptive.
The argument is, the same thing applies to meat. In the primitive state, you only get fresh meat by hunting down something with hooves and horns that runs about 3x faster than you, beating out any more fearsome carnivores in the vicinity, and successfully killing it with your bare hands, or a sharp stick, without being killed (or badly wounded) yourself. This is most likely a pretty damn rare occurrence. So if getting meat *at all* is something your Australopithecine can count on *maybe* once a week in the dry season, there is no harm and considerable good for evolution to maximize the taste for the stuff. Like the sugar, you can't possibly get too much, and a taste that encourages you to put the extra effort into chasing the cute baby antelope with the broken leg is adaptive.
In both cases, the argument goes, evolution did *not* design us to cope well with a strange world of plenty verging on excess, where we *can* eat ice cream or steak for every meal of every day if we want.
I'm not saying the argument is correct, but it's not stupid or obviously wrong.
Very similar situation at the same age. Only berries for fruit, to keep the carbs low, but more or less all leafy greens, eggs and cream, and meat.
Not to be too uppity, but what always strikes me about the diet is how full of energy I feel compared to everyone else. I know it's hip to be tired all the time, especially in the doomer academic crowd I'm in, but I honestly just never get tired since I've removed carbs.
I've been vegan for almost 20 years (mainly for animal-rights reasons). My bloodwork has been excellent, much better than it was when I was omni. My health is pretty good, and I also feel great. I have no regrets, and I've converted some friends and family members to veganism. They are all doing well on it.
Veganism is a cult. Vegetarian is a way of eating. It serves some people well, and others quite poorly. Carnivore is also a way of eating. It serves some people well, and others quite poorly. People are different, go figure!
Warning that the first sentence here is the kind of thing that will get you banned if you repeat it too often. It's insulting and not backed by any argument.
I thought that was generally agreed-upon. Would it help if I backed it with an argument? E.g. cults require you to wear special clothing, and vegans are required to avoid animal products in their clothing.
Worth mentioning is that bloodwork, while far more objective than personal sentiments, does not enjoy universal agreement from the scientific community as far as markers for good health go. My LDL is substantially higher now (109) than it was before going carnivore, but my HDL is great (67) and my triglycerides are fantastic (18). Carnivore evangelists like myself will say this is fine, but I think the conventional view would suggest that my LDL levels are worrisome, despite my exceptionally high degree of physical fitness.
Everyone’s confused about LDL. But I’m pretty sure the hundred million people who stop eating milk and eggs and meat because “cholesterol” are making serious mistakes. Although it might work anyway just by being a general dietary restriction, it’s still probably bad
To clarify, I don’t think that those people are not getting enough cholesterol if they stop eating meat or dairy. But that stopping eating meat or dairy isn’t a good idea for their health goals (heart disease, hypertension, whatever), and that eating meat is better for health for a myriad of other reasons.
Funny, I gained twenty pounds for the two years that tried to be a vegan and my cholesterol counts skyrocketed. I suspect it depends somewhat on your genes...
Vegetarian for 10 years. A worthlessly hard diet to maintain health with, which only became apparent after 10 years, post-return to an 80% keto-ish diet with grass-fed meats and wild meats. Low carb. Intermittent fasting. Health indicators across the board better: blood work, endurance, muscle build, mental health, sex drive, etc. Confounded, but I won't trade my current diet for vegetarian again.
Bad compared to what? If you compare a typical animal-based diet to the healthiest possible vegan diet (including supplements for things like creatine), sure, the animal-based diet will come out looking bad. But if you compare the healthiest animal-based diet to a vegan diet full of industrially processed food products, the animal-based diet will come out looking pretty good. And of course there are individual differences between us all.
There is the famous "China Study" which claimed from population analyses that red meats are bad. And many other studies too (for instance https://link.springer.com/article/10.1007/s10552-021-01478-2). But confounders abound... and I still think stuff like 90% grass-fed beef consumed in moderation (sous-vide cooked ideally, not grilled/charred) or salmon is healthy.
Black hole information loss and the fate of evaporated black holes are both open questions. If the former answer is "it happens" and the latter answer is "they stick around but immediately re-radiate anything that falls in" then an "evaporated" black hole is still a matter-to-energy converter (via absorbing matter and re-radiating it as a matter/antimatter/photon mixture), which is probably not a good idea to put inside the Earth (though plausibly it could have a low enough eating rate to not be a problem).
LHC fears were definitely overblown, due to the "cosmic rays are more potent and we're still here" thing, but if we get up to actually-unnatural-on-Earth collision energies it's probably a good idea to do it in space whether or not our current theories predict doom (we do these experiments to test them, after all - they could be wrong!).
The Schwarzschild radius of a 80 kg black hole is 1e-25 metres. Even assuming it would just absorb anything coming within 100 Schwarzschild radii of it (which is ridiculous -- black holes aren't vacuum cleaners), a cylinder of 1e-23 m radius going all through the Earth has a 4e-39 m³ volume, so 2e-32 grams at the average Earth density -- IOW you'd have to be very lucky to even catch one electron.
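If you want to check that arithmetic yourself, here's a minimal sketch in Python (the 80 kg mass and the 100-Schwarzschild-radii capture radius come from the comment above; the Earth diameter and mean-density values are standard figures I'm filling in):

# Rough check of the swept-cylinder estimate above.
# Assumptions for illustration: 80 kg hole, capture radius of 100 Schwarzschild
# radii, Earth diameter ~1.27e7 m, mean Earth density ~5,500 kg/m^3.
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
M = 80.0                 # black hole mass, kg

r_s = 2 * G * M / c**2                    # Schwarzschild radius, ~1e-25 m
capture_radius = 100 * r_s                # generous "vacuum cleaner" radius
earth_diameter = 1.2742e7                 # m
earth_density = 5514.0                    # kg/m^3

swept_volume = math.pi * capture_radius**2 * earth_diameter   # a few 1e-39 m^3
swept_mass_g = swept_volume * earth_density * 1e3             # ~3e-32 g

print(f"r_s = {r_s:.2e} m")
print(f"swept mass = {swept_mass_g:.1e} g (an electron is 9.1e-28 g)")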
> I'm also now wondering why some people freaked out about the possibilities of LHC creating Black Holes
They were trolling (or had no idea what they were talking about, or both).
I seem to recall certain candidate theories predict no Hawking radiation, but they also predict no way for the LHC to create black holes, so to predict a non-evaporating black hole at LHC you need to mix-and-match theories without paying much attention whether they're even consistent with each other.
Furthermore, assuming Lorentz invariance, the LHC can't possibly do anything that ultra-high-energy cosmic rays don't already do every day, so you'd also have to assume Lorentz invariance violation.
All in all, those people were about as reasonable as if in 1968 they had freaked out about the possibility of reading the bible in lunar orbit crashing Uriel's machine to keep the world bound to mathematical laws.
An upper bound to the amount of mass the black hole could absorb is the mass that could reach the black hole within the hole's lifetime if that mass were instantly accelerated to the speed of light towards the hole. I.e. a sphere with a radius of the hole's lifetime times the speed of light.
43 light-picoseconds is 1.29 cm. A sphere with that radius has a volume of about 9 ml, so if your black hole is formed inside a body of water it will absorb less than 9 grams of matter before evaporating. Even if it's surrounded by osmium (the densest naturally occurring element, about 22.6 g/ml), the speed-of-light ceiling on absorbed mass would be around 200 g. Which is still a rounding error in our original guesstimate of an 80 kg initial mass for our black hole.
Also, this upper bound is a stupendous overestimate: 80 kg of mass is nowhere near enough to nearly-instantaneously accelerate material half an inch away to anywhere near the speed of light.
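Here's the same kind of sketch for this speed-of-light bound (the ~43 ps lifetime is the figure quoted above for an 80 kg hole; the water and osmium densities are standard values I'm assuming):

# Speed-of-light upper bound on absorbed mass before evaporation.
# Assumes the ~43 ps lifetime quoted above for an 80 kg hole.
import math

c = 2.998e8                      # m/s
lifetime = 4.3e-11               # s, ~43 picoseconds

radius_cm = c * lifetime * 100                  # ~1.29 cm
volume_ml = (4 / 3) * math.pi * radius_cm**3    # cm^3 == ml, ~9 ml

water_density = 1.0              # g/ml
osmium_density = 22.59           # g/ml

print(f"light-sphere radius = {radius_cm:.2f} cm, volume = {volume_ml:.1f} ml")
print(f"max absorbed mass: {volume_ml * water_density:.0f} g in water, "
      f"{volume_ml * osmium_density:.0f} g in osmium")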
Is there a threshold that exists within the Earth where matter can enter the blackhole faster than it evaporates? Or does it take something like a neutron star to do that?
It looks like the threshold density for that would be around 15 kg/ml. That's a lot denser than the cores of active stars, but a couple orders of magnitude less dense than the electron-degenerate matter that makes up white dwarfs (about 1,000 kg/ml). The neutron-degenerate matter that makes up neutron stars is several orders of magnitude denser still (10^11 kg/ml).
I think the "vacuum cleaner" issue was that the outer core is a liquid under high pressure, so if a black hole lasts long enough to absorb just one more particle, another particle flows in to replace it extremely rapidly.
(But it'll take at least a minute to fall through the mantle, and I've learned here how massive a black hole has to be to last a minute.)
1 kilotonne (1 million kg) is probably around what you're looking for, as according to the formula that should last 84.1 seconds. 100 kilotonnes lasts 2.7 years.
(Lifetime scales with mass^3, so scaling by orders of magnitude either way from those is fairly mathematically simple.)
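For reference, a minimal sketch of the standard photons-only Hawking lifetime formula these numbers appear to come from; the prefactor works out to roughly 8.4e-17 seconds per (kg)^3, which reproduces the 43 ps, 84.1 s, and 2.7 year figures above:

# Standard Hawking evaporation lifetime (photons-only approximation):
# t = 5120 * pi * G^2 * M^3 / (hbar * c^4)
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
hbar = 1.0546e-34    # J*s

def hawking_lifetime_s(mass_kg):
    """Evaporation time in seconds for a black hole of the given mass."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

# 80 kg -> ~4.3e-11 s (43 ps); 1 kilotonne -> ~84 s; 100 kilotonnes -> ~2.7 years
for m in (80.0, 1e6, 1e8):
    print(f"{m:10.0f} kg -> {hawking_lifetime_s(m):.3g} s")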
I'm fascinated that something so illegal has a clearnet website, with a phone support line. How do you evade authorities? Isn't the risk with such enterprise enormous?
I wonder if it's just a scam where they take your money and don't send fake bills. Law enforcement has to prioritize, and it takes so much paperwork to take down a website (which they'll immediately re-register), or so much manpower and investigation to track down who's shipping it, that it may take a while to catch them even if they do get investigated. And yeah, they can just be in Russia or something.
I'm guessing the actual operators are based out of a country where crimes against Americans/the American state committed via internet aren't prioritized.
I have been doing a lot of reading about trying to create an AI that has the somatic responses of human beings which I guess is the issue I’m hung up on. So I’ve been trying to educate myself to the state of the art. In light of this. I realize some of my comments are a little like a middle schooler sitting in on a PhD class and asking rather dopey questions. I’m a little embarrassed.
However I am struck by two threads in these comments; discussion of AGI And the questions about parenting and what might be the most effective way to raise one’s child.
That’s kind of fascinating in itself to me.
I have raised a couple of kids and some of the issues seem to really crossover
THE WILL TO LIVE is a subject that I raised earlier in this thread, in the context of AGI.
I received some very instructive replies which I appreciated. As I mentioned in that post, I have no scientific or mathematical training.
Certain people were kind enough to indulge me and explain things to me.
The best lesson was someone pointing out to me that a self driving car behaved very much like something that had a will to live. That was very sharp.
So I’ve been pondering.
I would like to amend my inquiry:
The Will to Power.
How does that unfold in the pursuit of general artificial intelligence?
What if there is a trait such that
(1) net worth and STEM citations are strongly correlated with the trait
(2) the trait is strongly inheritable
(3) there are visible external characteristics correlated with the trait?
I suspect this might be true for IQ and to a lesser degree height. What should a moral society do at that point?
Intresting question. Well like today, we have fat and slim people. Fat people are still made fun of all time, even though they shouldn't be. In a moral society, people would still be treated the same even though they would be able to tell who is richer or poorer depending on how they look. But what is likely to happen is that people will prefer to date/hang out with someone with the "rich"-looking trait than the one of the poor looking trait.
In choosing a spouse, surely being overweight or impoverished would (should?) make a difference. There may be benefits to holding less superficial values, but likewise for observing these common biases.
To what extent is choosing friends/acquaintances/business partners a matter of degree vs a fundamental difference?
At a certain point in my life the defining characteristic of choosing a spouse was the attribute of kindness
What if some politically influential parts of society complain about racial inequity?
Usually they make the assumption that the distributions of intellectual ability or aggressiveness are the same for all the visibly identifiable subpopulations. And a logically consistent person who shares that assumption would have arrive at the same conclusion.
Scott you are one of the few American Jewish public figures who has talked about the Jewish intellect without being apologetic.
It's understandable that many Jews shy away from this topic; after all gentile geniuses tend to be excessively humble too.
Do you think there are differences on average between the various gentile ethnic groups?
If you think there are such differences do you think the general public should be educated about it to counter the racial inequity narrative?
Also in a country that provides a lot of assistance to the poor citizens do you think everyone should have the right to make as many little citizens as they want? Or maybe depending on their genetics that should be controlled?
https://scitechdaily.com/inescapable-covid-19-antibody-discovery-neutralizes-all-known-sars-cov-2-strains/amp/
Is this nonsense or worth paying attention to?
I wonder if anyone's done further research into nBack intelligence.
Gwern has done a terrifying amount:
https://www.gwern.net/DNB-FAQ
The high level takeaway I got was that it doesn't seem to generalize to tasks other than nBack.
As in, nBack as a method to increase IQ.
When do you think people will develop the ability to artificially create zygotes with given nDNA and mtDNA?
So for example babies with Yoruba skin, Dutch height, Ashkenazi intelligence and Japanese eyes?
Also when do you think artificial wombs will be available?
has anyone else stopped getting emails of new posts?
I'm frustrated and sad that good things I liked might end or might at least be different because people I liked did dumb shit.
Current Affairs was my favourite magazine, but then NJR handled the recent staff thing really badly (he flip flopped on how much authority he wanted, it blew up during a hiring decision between two great candidates—the biggest waste is that the candidate he liked would have been the staff's fave too he just didn't want the decision to be democratic anymore!). Now Lyta, Nate, and Adrian have all written open letters about it and the mainstream and conservative media are talking about it and not all publicity is good publicity, and who knows if the magazine'll even last.
Aubrey de Grey's the bearded longevity-research guy who founded the SENS foundation. Now there are sexual harassment allegations against him; who knows how it'll turn out, but also he idiotically interfered with the investigation (emailing a mutual of one of the allegers to persuade her to implicate someone else, instead of just talking to the investigator), so SENS obviously had to let him go. His idiotic decision is bad for himself of course but also bad for longevity research, which really needs to behave professionally to get respect and shake its image as just, idk, Peter Thiel's rich-people-out-of-touch-with-normal-people's-struggles-pipe-dream, and show that it's a safe and thriving place for young scientists including women to work.
David Sabatini, another big name longevity-research guy (an expert on rapamycin and mTOR) also just got let go from the Whitehead Institute for sexual harassment allegations, and in this case the reaction on twitter makes it sound like it was a really open secret so I'm guessing that they're true whatever they are (no details are public).
The pandemic has done disproportionate damage to worse-off people. What would pandemic response look like in a more egalitarian society? How would risky in-person work be handled?
What David Piepgrass said. As for risky in-person work, what couldn't be eliminated might be done by volunteers from the less vulnerable part of the population (young, healthy), being paid a significant risk premium - and supplied with decent PPE.
Of course this implies that old Joe with emphysema, the expert/foreman in the meat packing plant, would be able to get his income replaced while not working, and/or training for a new role. That would be a very hard sell in the US. And probably a hard sell to him too; few people want to become newbs again.
It's arguable that the reason stockpiles of PPE were inadequate at the start of this was the never ending drive for higher profits and lower costs. I see recent stock market darlings causing major damage locally (hello, PG&E) by cutting corners in the past to enhance their bottom line. There's no reason a government should do the same, not being driven by profit - except that habits of thinking may not follow logic. Hence the expired and non-existent items in US and other government stockpiles.
So there might actually have been adequate PPE at the start of this, at least for those with the highest risk work (treating the sick).
If the concept of "egalitarian" could include the culture of the people themselves, we'd have a lot less Covid going around, since people would take vaccines to help their community reach herd immunity. I watched a guy named Yuri yesterday, a friend of Weinstein, debunking lots of claims by Weinstein. I thought he did a good job, but he made classic mistakes like decrying the anti-vax campaign as immoral because it's, you know, killing people. Well, judging by YouTube comments, most red tribe members couldn't hear anything he said after that.
An egalitarian society would say "Covid tests are free, and if you catch Covid we'll pay for you to quarantine in a hotel and immediately pay replacement wages for the work you missed. Also, we need you to think hard about who you came into contact with so they can be tested immediately."
Responding to Carl Pham:
How long was it that the English tried to ride herd on the Irish?
It's not over yet but, let's face it, you can't say the Brits walked away winning.
At the very least the Irish stole the English language and transcended it.
Totally tribal moment…
😆
> The Brits certainly fixed the Boer's wagon.
I don’t think that’s how it worked out in the end
> Oh come on. Where's the National Socialist German Worker's Party these days? What happened to Tojo? The Brits certainly fixed the Boer's wagon, home guerilla advantage notwithstanding. There are no Nationalists left in mainland China, and no followers of the late President Thieu bombing the occasional railroad bridge in Vietnam. It is certainly possible to win against a weaker opponent even if he's got the home-ground advantage, can melt into the countryside, is willing to live in caves and eat rats and don suicide bomb vests. It's entirely possible to wipe those people out, root and branch. But it requires focus and commitment, and quite often some pretty ugly decisions. You have to be damn sure that's what you want to do. Being half-assed about it, and not entirely certain what you're trying to achieve definitely doesn't work, and never has.
I don’t think that all of your cited examples are at all applicable.
Nazi Germany, certainly not; China, certainly not.
The one exception is Vietnam, and we all know how that turned out.
That kind of war is a waiting game.
And a test of commitment.
Am I the only one that gets this issue, where I get an email that someone has replied to me and I hit reply and for a brief moment my comment is highlighted on my screen and then it defaults to some random place in the order of comments?
But meet us halfway and at least tell us what email software you are using!
Mail on iOS
Sorry, not an iOS user... but maybe David is!
This is a pretty good indication that there is something about mail in iOS and the comment section here that doesn’t play nice together. I will try it on my computer and see if it persists there.
You're not alone.
Sometimes people's blood pressure drops after eating. For some of those people, it drops into dangerous territory, but I wonder whether eating frequently would help some people lower high blood pressure.
I work as a contractor for the US Federal Government, and their rollout of the vaccine mandate has been troubled to say the least. A brief timeline:
2 weeks ago: They are developing a plan to implement Biden's executive order requiring vaccines or recent COVID testing to enter Federal facilities. More information will be released soon.
Yesterday: Starting two days from now, everyone must either sign a sworn affidavit saying they are fully vaccinated or provide a negative COVID test taken within the last 3 days. They aren't sharing vital details like who will pay for the tests, how security officers will evaluate COVID testing results, how they will verify who has signed the affidavit, and how this won't cause massive delays as hundreds of employees enter the building at the start of the shift. You still need a test even if you have received a vaccine if it has been less than two weeks since your last dose, providing an incentive to get J&J rather than Pfizer/Moderna. Most of this doesn't matter though, as most of the unvaccinated will just lie on the form knowing there probably won't be any negative consequences for doing so.
Today: Implementation of the mandate has been delayed. More information will be released soon.
Scott's recent "cost-benefit analyses are not done often enough" (paraphrased) made me think of this 1999 paper.
Parachuting for charity: is it worth the money? A 5-year audit of parachute injuries in Tayside and the cost to the NHS https://pubmed.ncbi.nlm.nih.gov/10476298/
Conclusion: Parachuting for charity costs more money than it raises, carries a high risk of serious personal injury and places a significant burden on health resources.
That is a sobering bit of data.
On the other hand, it seems like it's effectively just an inefficient way to transfer money from the NHS to your preferred cause.
"The Deeper Crisis Behind the Afghan Rout: Observers abroad see the culmination of decades of American incompetence." By Walter Russel Mead | August 23, 2021
https://www.wsj.com/articles/afghan-rout-crisis-foreign-policy-forever-war-jihadist-taliban-biden-allies-moscow-beijing-11629750508
* * *
This isn’t a conventional credibility crisis of the kind President Obama faced when he backed down from his Syrian red line. America has demonstrated its commitment to Afghanistan for 20 years and had no treaty obligation to defend the former Afghan government. A competently executed withdrawal could have enhanced American credibility among some Pacific allies, especially if it was accompanied by clear steps to build up U.S. forces in East Asia.
The Afghan debacle doesn’t create a crisis of belief in American military credibility. Informed global observers don’t doubt our willingness to strike back if attacked. The debacle feeds something much more serious and harder to fix: the belief that the U.S. cannot develop—and stick to—policies that work.
Neither allies nor adversaries expected perfection in Afghanistan. Mr. Biden was right to say that the end of a war is inevitably going to involve a certain amount of chaos, and world leaders likely didn’t anticipate a seamless transition. They did, however, expect that after two decades of intimate cooperation with Afghan political and military forces, the U.S. wouldn’t be blindsided by a national collapse. They didn’t think Washington would stumble into a massive and messy evacuation crisis without a shadow of a plan. They didn’t expect the Biden team to have to beg the Taliban to help get Americans out.
It all fuels fears that the U.S. is incapable of persistent, competent policy making in ways that will be hard to reverse. It seems increasingly evident that despite, or perhaps because of, all the credentialed bureaucrats and elaborate planning processes in the Washington policy machine, the U.S. government isn’t good at producing foreign policy. “Dumkirk,” as the New York Post called the withdrawal, follows 20 years of incoherent Afghanistan policy making. Neither the past two decades nor the past two weeks demonstrate American wisdom or the efficacy of the byzantine bureaucratic ballet out of which U.S. policy emerges.
* * *
I think everyone realises it was always going to be a mess, but the question is, was it more of a mess than it needed to be? Right now, while it's not great, the Taliban seem to be relatively keeping their heads down (yes, they're executing police chiefs and hunting down journalists, but they're not on a rampage as yet) so it's bad but not rivers of blood bad yet.
And I think that "yet" is what we're all waiting to see: once the Western forces and Western civilians are gone, and it's just the Taliban, what are they going to do?
I can't really blame the army for collapsing as it did; it seems (and again, I'm only going by what the news is telling me, and who knows how true that is?) that any of the officers who had personal power-bases before the US came along to prop up a puppet regime, or who are convinced anti-Taliban, have headed north or back to their own tribal bases to start a resistance, rather than staying and fighting with the army in the expected, conventional sense - and that's probably part of the problem right there, that they trust their own tribal alliances or their own picked men, rather than having any confidence in the troops under them and the army as a coherent national force.
For the rest of the soldiers, it's "stand and fight, but when the government is handed over to the Taliban - and they're already in discussions about handing it over - then anybody who fought the Taliban is for the chop" so of course they got out as fast as they could. Wearing the uniform is painting a target on your back. Why stay and fight for a regime that is already working on surrendering?
https://apnews.com/article/afghanistan-the-latest-d832920fb7b00d6d9bd5e54fd7fc3ef5
It's interesting that Hamid Karzai is still a power broker - so, did the Americans back the wrong horse with Ashraf Ghani, or was it a case of "at the time he suited our purposes"?
I've also read that the Afghan army wasn't getting food and ammunition.
See Carl Pham's comment below. You can decide to leave the party, and take your glass and your plate into the kitchen, and find the hostess and thank her for the lovely evening. Or you can sneak out without cleaning up or saying anything to anyone.
I'm trying to find background on Ghani and why he fled but Karzai is apparently secure enough to not alone stay, but be involved in talks, and reading the Wikipedia article on him is just one 🤦♀️ after another:
"He is also the co-founder of the Institute for State Effectiveness, an American organization set up in 2005 to improve the ability of states to serve their citizens.
In 2005 ...Ghani gave a TED talk in which he discussed how to rebuild a broken state such as Afghanistan
Ghani ran in the 2014 presidential election securing less votes than rival Abdullah Abdullah in the first round, but winning a majority in the second round. Following political chaos, the United States intervened to form a unity government."
I swear, the main difference I see between Ghani - who was conciliatory in speech at least towards the Taliban - and Kharzi - who actively fought against them, yet one has to flee and the other can remain in the country, is their relations with Pakistan: Ghani was cool to frigid with them, Kharzi was on good terms.
Which does sound more like Pakistan pulling the strings of the Taliban. Which is its own entire problem, because this is more smoke and mirrors - what you or don't do when you're negotiating with the Taliban may be an entire waste if Pakistan is in the background putting its thumb on the scales.
Sorry, misspelling the man's name, he's Karzai not Kharzi (no idea where that one came from).
I don't usually support Biden, but in pulling out of Afghanistan, he showed political courage and far sightedness. The fall of Kabul is often compared to the fall of Saigon, but the lesson of Saigon isn't that the US should have stayed bogged down in Vietnam for another 20 years. It's that the US should have pulled out much sooner. As Biden said:
"We spent over a trillion dollars. We trained and equipped an Afghan military force of some 300,000 strong — incredibly well equipped — a force larger in size than the militaries of many of our NATO allies. We gave them every tool they could need. We paid their salaries, provided for the maintenance of their air force — something the Taliban doesn’t have. Taliban does not have an air force. We provided close air support. We gave them every chance to determine their own future. What we could not provide them was the will to fight for that future."
Despite the overwhelming financial and military advantages the US gave the Afghan government, the Taliban overran the entire country in just weeks, taking half the country in just a few days. Why? Because 99% of the population supports Sharia. 92% of Afghan *women* support wife beating (https://www.prb.org/resources/most-women-in-afghanistan-justify-domestic-violence/). 79% of Afghans think you should be killed for leaving Islam. Why would Afghans support an immoral, corrupt, incompetent, foreign-backed infidel puppet regime with values diametrically opposite to their own? One that can't even deliver internal peace, the most basic requirement of any state? One propped up by warlords with child sex slaves (https://www.nytimes.com/2015/09/21/world/asia/us-soldiers-told-to-ignore-afghan-allies-abuse-of-boys.html)? The Afghan puppet regime had no legitimacy. The Taliban does, because the people support its brutal theocratic values.
The Afghan people got the government they wanted. If the Taliban ends the 50 year long civil war, puts an end to the child sex slavery, and provides even the most basic government services, they will have done more for Afghanistan than any government since at least the Soviet invasion of 1979.
Kind of a straw man there. The criticism by Sobchak, which mirrors the general criticism, is not the "what" (concluding the mission in Afghanistan) but the "how." President Biden asserts, laughably, that there was *no* better way to leave then the way we did, or else he moves the goalposts to suggest that those who suggest he royally screwed the pooch on the "how" are arguing about the "what." They're not, and it's dishonest to say they are.
So what better way was there to leave? The WSJ article puts forth no alternative proposal in the non-paywalled portion, even though it's 5 paragraphs long. The moment the US started withdrawing, the Taliban was going to go on the offensive. Judging by recent events, they would have quickly captured the country. It wouldn't have mattered if the US left in 2021, 2030, or 2085. There are people who say Biden pulled out too quickly, but they rarely say how long he should have stayed, if 20 years wasn't slow enough.
Biden did the right thing. A lesser president would have balked at the prospect of the Taliban capturing Kabul on his watch and kicked the can down the road, as the previous 4 presidents did, causing more death and destruction on all sides. Biden ripped the band-aid off quickly and took the political hit for the benefit of the United States--and ultimately, for the benefit of Afghanistan.
> So what better way was there to leave?
Well, at the very minimum, Biden could've ordered all of our equipment and materiel to be destroyed, rather than just allow the Taliban to take it. But that's a pretty low bar; in general, a more gradual withdrawal, covered by continuous airstrikes, would've been more effective (though admittedly more expensive).
Previous *four* presidents? I only count three - Bush/Obama/Trump. September 11 happened in the first year of Bush's term.
You're right, I was thinking that Biden is the 4th president to fight the war.
For a start, do not leave helicopters and other equipment for the Taliban to capture. Evacuate the people they want to evacuate earlier.
Admit that Afghanistan failed and that the country would collapse, at least internally.
Though likely that would be rated worse by the public than the current mess.
The US forces attacking the Afghan army to seize the equipment they had previously given them would certainly be a controversial move.
I am pretty sure that just taking it would work fine. Though that would require admitting that the Afghan army is a 100% failure.
Yeah no. I'm pretty sure CENTCOM could have come up with a plan that was loads better than what actually happened. Give me command of 30,000 Marines and associated equipment and even I could do it better.
What plan would that be? What experience do you have commanding large armies? Which wars have you fought and won? Just asserting that you can do better than the professionals of the world's most powerful army isn't very convincing.
> "Which wars have you fought and won?"
Man, that's a bold question
I have no experience at all. Which is why I said "even I could do it better." I'm 100% sure the "professionals of the world's most powerful army" could've done it a million times better -- had the President allowed them to do so. But as many stories in the media will tell you, he did not, so all their planning chops were in the event moot.
I think every person working for Central Command, from General McKenzie on down, could have individually come up with a better plan for withdrawal from Afghanistan; but the organization itself might not.
It's structural: the service chiefs buy airplanes and ships, train personnel, and do all the work of building a military.
(Another time, we can talk about how this is an incentive for the service chiefs to waste money on useless crap like LCS.)
Then the combatant commanders compete with each other to see who can get the biggest slice of that pie. Aren't they supposed to be competing with China and Al-Qaeda and drug dealers, instead of each other?
I know the seriousness of the charge I'm making (and I really want to emphasize this is a *Meditations on Moloch* style bad system, not a matter of any individual being dishonorable).
But the logic is inescapable. If Central Command's priorities had been twisted by bad incentives to maintain or increase its justification for manpower and materiel that Indo-Pacific Command also wants, then it has an incentive to spend less effort building an Afghan army that can stand on its own and more effort giving them support only we can provide, like close air support or air-dropped logistical delivery.
Most importantly, it's an institutional motivation for CENTCOM, the United States military command responsible for Afghanistan, to always be putting off asking the Afghan army to defeat the Taliban; to always find a reason to stay; to never quite be willing to prepare to leave.
And I know that's a shitty motivation and doesn't make Central Command look great. But Central Command has been overseeing all US military activity in the Middle East for the entire time we've been in Afghanistan. If I'm mistaken about the combatant commander having an incentive to prolong the conflict, that means they were trying to defeat the Taliban the whole time...and failed.
What cities in the US would approximately meet these criteria?
Deal makers/breakers:
- Must have good weather (= sunshine) almost round the year (e.g., Miami).
- No snow.
- Must not be in H(awaii)ST (Also, e.g., Miami)
- Must be located in an urban or urban-adjacent location (e.g., San Antonio, TX)
- Must not be poised to get a lot worse climate-wise over the next 10-20 years (excessive heat, more storms, more rains, or floods).
- Must not sit on a geographical fault-line that could go boom someday "soon" ;-) (e.g., SFO)
- The local politics must not be the "bubble" kind. That is, it can be right-leaning or left-leaning - but (a) it should not be extreme in either direction and (b) it shouldn't be filled with insular people who have no idea how to get along with 'the others'.
- Must have fiber optic internet generally and easily available.
Strong preferences:
- Should not be in EST (e.g., Miami, NYC are all too far away from the tech-coast, time-zone wise. Might work though).
- Should be near a major or a minor tech center (there used to be only 2.5 tech-centers in the US namely SFO, SEA, NYC, but there are more now). If any of FAANG has a local development office, that's a very good sign.
- Should be cheaper than SFO, SEA, NYC :-)
- Should have direct flights to SFO and SEA (this is just a corollary to 'should be near a minor tech center')
- Should have highbrow culture (theatre, music etc.).
- Should have Amazon Prime easily available in most places ;-)
- Should be low crime, culturally diverse, and expected to sustain these characteristics over the long term (20 year outlook).
I bet Albuquerque NM fits most if not all of these criteria!
It snows in Albuquerque, it's not really a tech center (though Sandia is there), and it has high crime rates.
Have you thought about dangerous flora and fauna? I grew up outside of the range of most poisonous snakes, insects carrying dangerous diseases, and insects dangerous in themselves. The prevalence of hazards of this kind seems to be increasing over most of the continental US, partly due to non-native pests spreading, and partly due to warmer winters. And unlike larger dangerous fauna (bears etc.) the smaller dangerous pests often do quite well in (sub)urban areas - which is where you'll be if you want highbrow culture and a good internet connection.
So what's wrong with snow? Says the guy in Saint Paul. :)
I know. We are all so very different. I've aged out of winter camping in the BWCA but watching the northern lights after the sky clears with zero light pollution is pretty damn fun.
My body does run pretty hot though. The high summer humidity in the Twin Cities is a much greater burden than winter cold and snow for my particular metabolism.
There is a low lying area near my old home town where a state record -62F was set in the late 1990's. At the time I was working with an engineer from Novo Sibirsk. He had no idea that North America got that cold. Blew him away.
I've spent time outdoors in temperatures down to -50 F in northern Minnesota.
<Not Irony> The experience was pure invigorating fun. </Not Irony>
At any rate, I wish you good luck finding your own climatic and cultural sweet spot.
Life is a bit more complicated (and correspondingly a lot more fun) when factoring in a significant other ;-) If I were single, this would be easier. If my spouse had no roots in the US and were an immigrant like me, perhaps this would be easier as well. If she didn't have strong opinions or a career to think about, this would be easier yet. If we didn't have to optimize for future needs like older in-laws potentially moving nearby, things might be simpler. And so on...
I've spent plenty of time in the cold, from childhood trips to Badrinath in the Himalayas (https://en.wikipedia.org/wiki/Badrinath - I don't recall the oxygen levels being a problem, I was running around and didn't skip a beat, no idea why everybody thinks 10k ft is a big deal) to walking around in downtown Montreal on New Year's Eve in -30C ;-) I've also spent plenty of time in equatorial places in 105F heat. I've become soft and comfortable now, so that's what I'm looking for next :D
I’d love to see Badrinath myself sometime but my SO - wife of 39 years - has some strong opinions about what she is willing to eat. She needs to be within striking distance of a Whole Foods or Trader Joe’s to keep her tummy happy. Vacationing in India is a pretty hard no for her. Oh well, she has made me a very happy man for a long time so I wouldn’t dare complain. :)
Yeah, I’d say that as an individual you are ready for whatever the planet throws at you. Good luck keeping everyone else in your life happy. :)
Miami's weather in the summer (which lasts 6+ months) is pretty bad. The heat is oppressive and there are daily (brief) thunderstorms. A matter of taste, plenty of people do prefer it to colder climates.
lol no, Miami is not a serious option. Thunderstorms are ok, but the humidity turns out to be too hard for us.
sounds like you just described Raleigh NC! (though there is a tiny amount of snow)
Raleigh is interesting. It's a great idea and seems like an emerging tech corridor. I have to research its potential a bit though in terms of how it will fare over time. I think Atlanta has been getting a lot of recent tech investments/hiring in the south and I wonder if Raleigh is going to keep on being a university-city sized tech-town...
Raleigh will definitely continue to grow. Look up the AI/ML campus that Apple is building. UNC comp sci is great, as is Duke's department, and it's a great place to live (driveable to the NE and the South, good airport, cheap COL, good housing, minimal culture war, close to the Outer Banks and the NC mountains, amazing in-state schools and tuition deals, good weather), so high human capital people will stay if there are jobs (and they already are). Google also has teams in Raleigh/CH because of all the comp sci profs. I would take the pair trade of Raleigh v Atlanta over the next two decades in a heartbeat. If I were going to start a co in a non-target city I would prob do it in Raleigh.
San Diego.
"Must not be poised to get a lot worse climate-wise over the next 10-20 years (excessive heat, more storms, more rains, or floods)."
Alarmist rhetoric greatly exaggerates the speed of climate change. Global warming so far, since 1913, has been less than one and a half degrees C. It may have speeded up more recently, but in 10-20 years it is unlikely to change the temperature of any city by enough to be visible through the noise of random variation. That fact is obscured by the tendency of the media to blame any unpleasant weather on climate change. Similarly, the high end of the IPCC sea level rise projection for the end of the century, as of the 5th report, was about half the difference between high tide and low tide — there are not many places where that makes much of a difference, and your concern is with a much shorter time period.
As annoyed as I am by climate science deniers, I do think the media has often exaggerated the threat. For instance, the common ECS estimate of doubling CO2 causing 3°C warming is something that is supposed to happen over hundreds of years, and the media often leaves out the "hundreds of years" part while failing to report the shorter-term warming expectation (TCR) which is more in the neighborhood of 1.75°C. I am more concerned with the effect of warming in more southern locations that didn't cause the warming - the Philippines, Central America, etc.
I am willing to put some money on a bet that the California climate will get worse over the next 10 years. Specifically:
* The summer temperature records will be beaten, then beaten again, several times.
* Seasonal wildfires will continue at their current level of severity, or increase in severity (until, of course, there's nothing left to burn)
* The current drought conditions will at least persist, and likely worsen.
* Air quality will continue to drop, as a consequence of the above.
I would also add that I perceive the climate where I live - very close to David Friedman - to have gotten somewhat worse in the 24 years I've lived here. I use air conditioning more. I experience more smoke days. (None noticeable in my first decade here.) I'm not sure whether drought being more common is climate change or coincidence - I moved here during an unusually wet year, which somewhat affected my expectations. But watering restrictions are now a constant presence.
It's also changed (for good or ill), such that plants that used to be grown commercially here no longer set fruit reliably enough to be worth growing locally. (Of course growers would probably have moved anyway, since (sub)urban growth means the land is currently more valuable with buildings than with orchards.)
What we haven't seen locally is catastrophic levels of change, though some of those burnt out of semi-local areas may disagree. People are not dying of heatstroke in extended periods of bad weather, except for the usual desert hiking mishaps (not all that local to me). People aren't being flooded out of their homes. We aren't having once in a century major storms. We're still only seeing those things as news coverage.
What isn't being grown locally?
The only thing I know of that A) was grown locally and B) has some trouble growing now is lady apples, but they were (and presumably still are - you see them in the grocery stores) grown in orchards in the hills, and the only place I've seen them have trouble growing is our yard (great harvest this year though - and most years, I'd guess our tree only fails to set fruit one year in three or so) which is... not in the hills. It's a significant climate difference!
There's definitely less good fresh fruit around than when I was a kid, but as far as I can tell it's mostly a "we need the land for houses" problem (so the farms get pushed farther out), not a "the tree won't grow/bear fruit here" problem, with an added-in "the Cosintino brothers retired, and none of their kids wanted the business, so the best produce shop closed despite being profitable" problem. (Why none of the local chains expanded to fill the niche I can't answer - my best guess is that sourcing good produce is hard.)
Are you thinking of a Central Valley problem, maybe? Because Silicon Valley was Prune Valley, and all kinds of plums and apricots still grow here really well - cherries too. It's just that's not the land's main value now. But I don't know the Central Valley as well.
I was thinking of cherries, and my understanding there was that they'd gone unreliable, though of course there was the double whammy of people wanting the land for houses, offices, etc.
I haven't studied the history, but...
Every year we get a reliable crop from every cherry tree except one of the two Dad planted without knowing whether they would bear in our zone (answer: one yes, one no). We'll have a reliable crop from that one too when Dad finishes mastering grafting. I don't know that we're using the commercial varieties, mind - we have less variety in cherries than in apricots or peaches - but my understanding is our sweet cherry was dead standard when we put it in, which was probably between twenty and twenty-five years ago - it was one of Dad's first. It's had various health problems - it's getting to be an older tree and we're a lot better at planting than at troubleshooting problems, especially fifteen feet above our heads - but fruit set has never been one of them.
It's possible someone was planting marginal varieties (but why? A variety that's seriously marginal now was mildly marginal twenty years ago, nobody but crazy amateurs* plants trees that won't bear even one year in ten), or that it's a problem in Gilroy, or somewhere else hotter than here - my Gilroy friends who have fruit trees do fine *but* the only cherry I know was bought recently so is presumably a modern variety - I don't know any 20-year-old cherry trees anywhere but here. But um, at least one point of evidence against.
(... and if you ever want to plant your own cherry tree, I promise you should be able to get a good one that will still bear perfectly. No such promises on apples, though; our successes are all weird so no good data on exactly how warm they go.)
*like us.
Water restrictions are more common for a simple and obvious reason: California's water infrastructure was designed in the 50s, mostly built between 1960 and 1975, and hasn't had any major addition at all since 1997. Meanwhile, the state's population grew from 16 million in 1960 to 30 million in 1990 to 40 million today. Simple math tells you the rest.
So population +25% in rather longer than the time period that concerns me (1997-2021), with these numbers applying to the whole state.
That could potentially explain a lot.
The *worldwide average* warming has been 1.5 °C, but there's much more warming over land than over water, and at medium and high latitudes than near the equator, so the amount of warming on temperate-zone land has been more like 5 °C, which is nothing to scoff at (it means that 35 °C days are as common as 30 °C days used to be, 30 °C as common as 25 °C used to be -- actually even more than that, as the variance has also increased as well as the average)
Er, the worldwide average warming has been more like 1.1°C with 1.6°C over land.
Generally those figures look much too high to me, but I would say scientists are bad at marketing, otherwise we'd be talking about the Paris target of 5°F land warming instead of the equivalent 2°C global warming.
I will continue to work on learning more about this, although I'd been under the (perhaps mistaken) impression that I was both well read and clearheaded about the topic of climate change.
Perhaps my time horizon should be longer than 20 years, and I should think about a 40–60-year time-span.
Sea level is not my only concern, although it's an important consideration. Things like drought/water table (and relative population vs water consumption; think the problems in CA) and reliable electricity availability (think TX and the problems specific to its grid) are also of concern to me. Some of these can be overcome (TX, for example - one can just install a generator) but others, like drought, are harder to overcome.
If you're looking at shorter time horizons, drought is still something to worry about. The Colorado river has been overdrawn from for the past century, and the latest droughts (which may or may not be exacerbated by climate change) have not helped that situation.
On the other hand, I'm uncertain if that affects your average city-person (it seems like a much bigger problem for farmers). Water regulators might start metering and charging for excessive use, but I don't see Americans in major cities dying of thirst.
Buying a generator may keep your AC on, but a power outage means you're not getting internet. I'm assuming your career is computer-related (since you specified fiber optic internet as a deal breaker), so maybe put weight on avoiding counties/states that can't keep the lights on.
I'm pretty sure you ruled out everything. You'll need to relax your constraints.
Thanks - I'm open to it. For example, sporadic snow may be something I'm willing to handle. I live in the Pacific NW now and have lived in British Columbia near the US/WA border in the past, so I can easily work with some snow each year.
For example, I've been toying with Austin, TX and nearby places as a potential option. It meets most of the expectations, except that it gets some snow and power is a bit wonky (which can be worked around if you're willing to own a home generator). Climate is the big question mark in Austin and makes me hesitate.
As DLR has suggested, Tucson is a nice idea, and Melvin's suggestion of San Diego also seems like a good choice modulo wildfire smoke problems (which can be worked around a bit with full A/C in a house).
I used to think that Portland, OR could be a nice option, until Portlandia stopped being a parody :-( I used to really enjoy driving down to Portland, Newport etc. to see friends and it was such a nice place 10-12 years ago. Sigh.
In terms of heat, drought, fire and cost of living, Oregon is well on its way to becoming Northern Northern California. To me that's much worse than it being a political/cultural bubble.
Tucson might work for you. Or Santa Fe.
Santa Fe probably gets too much snow -- a half dozen or so snow events, sometimes of a foot or so. I'm from Tucson and like it well enough, though it's pretty hot, bigger than I'd like, and water is an issue. The mountains are close enough for relatively easy summer day trips, which is nice.
Thanks DLR & Molly - Tucson sounds like a nice idea to research further.
Sunshine all year round plus no East Coast narrows things down to the south-west quadrant of the country pretty quickly.
I don't believe the climate will get significantly worse anywhere over the next 10-20 years since it hasn't over the last 10-20.
All the inland cities in the Southwest are too hot and culturally not great.
Los Angeles is too expensive (except in the really sucky bits) and too earthquake prone.
I think your shortlist might be San Diego or San Diego.
Anything north of LA in CA is likely to be negatively affected by wildfires in the coming years: dense smoke, power blackouts, freeway closures, and, of course, the actual fire (depending on location). These negative effects will be sporadic, however.
Thanks Melvin & Bugmaster. I visited San Diego years ago and enjoyed it, and my wife was on a conf trip and also liked it well enough. It has great Italian food which is a huge plus ;-) for us - we'll have to think about it some. We're in PNW so already familiar with fire+smoke problems - that's definitely a downside. Also worried about drought + water availability problems long term. I grew up in Chennai/Madras and never want to deal with water availability problems in my life again if I can avoid it (https://en.wikipedia.org/wiki/2019_Chennai_water_crisis was the culmination).
ACX has covered progeria and aging before, so perhaps some will be interested in a new paper I helped co-author that investigates the underlying cause of the disease. Hope you enjoy!
"Nuclear membrane ruptures underlie the vascular pathology in a mouse model of Hutchinson-Gilford progeria syndrome" Link: https://insight.jci.org/articles/view/151515
How effective are vaccines against long COVID?
I'm in the UK and, like most, received the Oxford-AstraZeneca vaccine. It seems pretty clear that the UK government has given up any pretense of trying to control the disease, despite the country being solidly in the middle of a third wave of infections. I'm fortunate to have a job that can be done from home, and to have had family and friends to live with (so I've not died of loneliness), but until the pandemic my life completely revolved around the swing dance scene: I danced and taught classes several nights a week, and spent most of my annual leave travelling to dance events around the world.
Obviously everything was cancelled, and for a long time, but now dance events are beginning to appear in my calendar again and I'm trying to do the risk/benefit analysis of doing what I love vs. staying safe.
On the one hand, when I put some (pretty conservative) numbers into microcovid.org, it's clear that partner dancing—even on an outdoor, fairly sparsely-populated dance floor—is a REALLY bad idea right now, even at a 10%-per-year "risk mitigation" budget. On top of this, I know quite a number of friends (including one of my teaching partners) who were unlucky enough to catch COVID back in ~March 2020 (despite precautions) and have still not fully recovered from it.
On the other hand, catching colds is an occupational hazard for partner dancers and a risk I have always been happy to accept—and so far everyone I know who's caught COVID post-vaccination has been either symptomless or has had a minor illness for a few days and then been fine.
It seems plausible that if a vaccine provides a good level of protection against serious illness then it might also greatly reduce the chance of suffering long-term effects. What does the science say?
Hmm, the risk didn't seem too high to me when I estimated with microcovid.org (assuming outdoors, both people fully vaccinated).
Is making sure people take LFTs before coming an option?
Many event organisers have requested that participants take lateral flow tests before coming, but few have enforced it—especially for outdoor events that are open to the general public.
You must have some pretty different assumptions about what an outdoor dance looks like. My guesses lead to it reporting an eye-watering 4% chance of catching COVID in a single 2h event. This seems obviously wrong, since if the risk were that high I would certainly have heard reports of new dance-acquired infections from recent events here—but it is not immediately obvious where the computation has gone wrong.
https://www.microcovid.org/?distance=close&duration=120&interaction=oneTime&personCount=20&riskBudget=100000&riskProfile=average&scenarioName=custom&setting=outdoor&subLocation=United_Kingdom_England&theirMask=none&topLocation=United_Kingdom&voice=loud&yourMask=none&yourVaccineDoses=2&yourVaccineType=astraZeneca
I assumed fewer people close to you, mRNA vaccines (which have a higher multiplier), and a silent/normal level of talking (do you talk to people loudly while dancing with them?). LFT requests probably give a decent multiplier too, depending on how many people you think comply and to what extent noncompliance is concentrated in risky people.
> Do you talk to people loudly while dancing with them?
It's not unusual—especially if you haven't seen them in 18 months!
> LFT requests probably give a decent multiplier too, depending on how many people you think comply and to what extent noncompliance is concentrated in risky people.
One thing that makes me and some of my friends quite nervous is that, at the moment, the most cautious people are still staying home, while the people who are going out dancing are (by definition) less cautious—and therefore potentially at above-average risk of having caught COVID. A remarkable demonstration of this phenomenon was provided by one local dancer who had given a sample for a PCR test on a Saturday morning in preparation for international travel, danced outdoors on Saturday afternoon and indoors on Sunday evening, and then learned on Monday morning that they had tested positive. Fortunately they 'did the right thing' by posting about it conspicuously on one of our local FB groups, so that people they'd danced with could get themselves tested, for which they were quite rightly praised—but when it subsequently came to light that this person had elected not to be vaccinated, reactions were decidedly more mixed: most were astonished and appalled, a stalwart few applauded, and the Bayesian reasoners figured, "well, that figures!"
This scenario has all 20 people listed as less than a foot away from you, which is very implausible. More realistically you should enter this as two scenarios and add the risks: one person a foot away, and 19 people 6+ or 10+ feet away.
Fair point!
The base scenario (2h on the dance floor with average person 3m away) then works out to 1000µCov, and it's another 700µCov for 8 minutes (~2 songs) dancing with one person. If I dance with 10 people (about 2/3rds of the 2h event) that gives a grand total of ~8mCov, which seems plausible but still a bit risky.
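A minimal sketch of that arithmetic, using only the per-scenario values quoted above (these are the numbers from the comment, not output from the microcovid.org calculator itself):

```python
# Back-of-the-envelope check using the values quoted in the comment above
# (illustrative assumptions, not the microcovid.org model itself).
BACKGROUND_2H_FLOOR = 1000   # µCoV: 2 h on an outdoor floor, average person ~3 m away
PER_PARTNER_8MIN = 700       # µCoV: one ~8-minute (roughly two-song) close-contact dance
N_PARTNERS = 10              # dancing for about 2/3 of the 2 h event

total_ucov = BACKGROUND_2H_FLOOR + PER_PARTNER_8MIN * N_PARTNERS
print(f"{total_ucov} µCoV ≈ {total_ucov / 10_000:.1f}% chance of infection per event")
# -> 8000 µCoV, i.e. the ~8 mCoV (~0.8% per event) figure above
```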
I know at least two partner dancers who'd had 2x mRNA vaccines (likely sometime in Apr/May '21 timeframe) who got breakthrough infections. The best I can tell from social media, neither suffered from long COVID. One of them was in early 40's and very healthy/active, other not sure what age group - likely late 30's to mid 40's range.
That said, there is some evidence that (a) the rate was around 20% for Alpha and we can infer that the rate might be higher for Delta (https://www.thelancet.com/journals/eclinm/article/PIIS2589-5370(21)00324-2/fulltext), (b) we see that Delta's attack-rate is around 20-45% (https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/990339/Variants_of_Concern_VOC_Technical_Briefing_13_England.pdf) and (c) Delta's attack-rate may very well be similar for both vaccinated and unvaccinated people (https://www.cdc.gov/mmwr/volumes/70/wr/pdfs/mm7028e2-H.pdf, if you do the math, you'll see that the attack rates are about 23% and 24% respectively for vaccinated and unvaccinated people; I don't have systematic/large studies for this unfortunately).
(c) is a bit of a weak-link, but I've read a bunch of other anecdotal studies that the attack-rates might be similar for both vaccinated and unvaccinated cohorts, so I'd love to see evidence to the contrary - that would make me feel better (assuming the evidence shows lower attack-rates for vaccinated cohorts).
And finally I've seen some anecdotal evidence suggesting that long COVID happens irrespective of vaccination status. I've also read arguments suggesting that when someone is vaccinated, their high neutralizing antibodies will keep infection levels low and thus COVID symptoms would be minimal, and also means long COVID unlikely. The premise here is that COVID symptoms and long COVID severity are correlated. I haven't found good studies that substantiate this intuitively appealing and logical claim.
I meant by "the rate was around 20% for Alpha" that "the rate of *breakthrough infections* was around 20% for Alpha"...
What mechanism exists within the medical billing establishment to prevent hospitals, providers, or drug/device companies from generating arbitrary billing? E.g. anesthesiologists seem to always be sending us bills long after the fact, either from our children's c-sections or my umbilical hernia surgery, demanding payment. In the case of the c-sections I am certain the one claiming payment wasn't there. Likewise Apria just seems to periodically sprout new bills for a cpap machine long paid off every time I change insurance companies.
About as often as not we argue with the billing entity and they say "Whoops, I guess we screwed up", but the other times they threaten to send the bill to collections. That of course has rather negative consequences down the road, even if you win in a small claims case. Anyway, getting off track, sorry.
So it seems that there is very little stopping a medical provider from just making up bills to send to people, either by mistake or malice, and very little one can do to fight those bills, despite there being no evidence that the bills are justified. Am I missing something here?
People do go to jail for Medicare fraud.
https://www.justice.gov/opa/pr/medical-biller-sentenced-45-months-prison-role-4-million-health-care-fraud-scheme just 1 example
I don't know how many resources can be brought against someone fake-billing individuals or private insurance companies.
I've downloaded the entire Covid dataset from [ourworldindata](https://github.com/owid/covid-19-data/tree/master/public/data), cut out all countries with <10,000 overall cases, bucketed the remaining ones into three groups based on median age, and then made XY scatterplots, one data point per country, with X-value = total % fully vaccinated and Y-value = average new cases over the surrounding 5-day window. (Both at a specific date only; I've tried 2021/06/01 and 2021/07/01.) The correlation is positive in all cases, i.e., countries with a higher % of people fully vaccinated have more cases. It remains positive if you add a 3-week delay to cases.
What's going on here? Anyone got a simple explanation? (Also, would you have expected this outcome?)
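For anyone who wants to poke at this themselves, here is a minimal sketch of the pipeline described above (not the original analysis code). It assumes the OWID CSV still exposes the columns it had in mid-2021 — total_cases, median_age, people_fully_vaccinated_per_hundred, new_cases_smoothed_per_million — and uses the 7-day smoothed series as a stand-in for the 5-day average; double-check the column names before relying on it.

```python
import pandas as pd

URL = "https://covid.ourworldindata.org/data/owid-covid-data.csv"
SNAPSHOT = "2021-07-01"

df = pd.read_csv(URL, parse_dates=["date"])

# One row per country at the snapshot date; drop aggregates (no continent) and
# countries with fewer than 10,000 total cases, as in the comment above.
day = df[(df["date"] == SNAPSHOT) & df["continent"].notna()].copy()
day = day[day["total_cases"] >= 10_000]
day = day.dropna(subset=["people_fully_vaccinated_per_hundred",
                         "new_cases_smoothed_per_million", "median_age"])

# Bucket countries into three groups by median age, then look at the
# vaccination-vs-cases correlation within each bucket.
day["age_bucket"] = pd.qcut(day["median_age"], 3, labels=["young", "middle", "old"])
for bucket, grp in day.groupby("age_bucket"):
    r = grp["people_fully_vaccinated_per_hundred"].corr(
        grp["new_cases_smoothed_per_million"])
    print(f"{bucket}: n={len(grp)}, correlation={r:.2f}")
```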
While I'd like to see clearer evidence for that claim, the explanation that comes to mind (if true) is that, time after time, whenever there are signs that the situation is getting better, there is a tendency for governments to immediately make things worse by loosening restrictions. In my home base of Alberta, for instance, they apparently saw that 50% of the population was vaccinated and decided that it was time to end the mask mandate (Delta? What's Delta?). As vaccination rates rose, they doubled down with new rules
> close contacts will no longer be notified of exposure by contact tracers nor will they be legally required to isolate — although it still recommended.
> The province will also end asymptomatic testing.
The conservative government backpedaled somewhat when Covid cases predictably shot up in response.
In addition to "the same countries can test and can vaccinate" point that other people are making, there's also the fact that some countries choose to double-down on the non-pharmaceutical interventions route (e.g. Australia notoriously has a slow vaccine rollout but a strong lockdown, I think Japan and South Korea might be in the same bucket?).
Since the vaccines reduce symptoms, maybe they are producing more unknowing carriers who then spread the infection to more people.
I think you would also need to plot how much testing each of those countries does in order to get any kind of useful data. The countries with the most vaccines are also the countries doing the most testing. Lots of people are tested in the US not because they have symptoms but because their job or school requires it on a weekly or even daily basis.
Also, countries that got the vaccine widely distributed also felt safe removing all the social distancing and mask-wearing restrictions (such as in the US), which essentially just let the virus rip among the many remaining unvaccinated.
Sure. Ask yourself what fraction of COVID cases are actually reported and available on the Internet in the United States versus, say, Iran, Mexico, or Burma. That will tell you why the countries with the highest vaccination rates are also reporting some of the largest number of cases.
[Doesn't check out.](https://i.ibb.co/4Rr2P4X/rc.png)
What is that supposed to be a chart of? Canada is X=6 Y=23 which means what? Whatever it is, outliers like UK might be having an outsized effect. I've been wondering why the UK has had so much Covid, any ideas?
It's X=percent fully vaccinated, Y=new daily cases per million, averaged over a 5 day period and with a 3 week lag. The lag was probably unnecessary as I said in the other comment (and UK is no longer a big outlier if you remove it).
I don't see how that graph says any such thing.
Actually it kind of does when you remove the delay, which probably doesn't make sense to include anyway given that the x-axis tracks fully vaccinated people. Nonetheless, the effect is not nearly as clear-cut as I was expecting before I made these.
Developed countries are able to track cases and vaccinate their population?
One more: maybe vaccinations have a heavy effect in reducing deaths/long covid/etc. but are poor at blocking infections? Delta waves hello.
I've also created graphs for new deaths rather than cases; they all have somewhat weaker correlations, but not as much weaker as you would predict. (Rich countries only and deaths gives a negative correlation, but not a particularly strong one.)
Primarily this but maybe countries with heavier infections were more motivated to vaccinate?
A delightful example of Simpson’s paradox, with x = vaccination status, y = measured cases, and the problem being that measured cases go up as measurement capacity and vaccination status go up with development. (If that’s why.)
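A toy illustration of that story, with entirely made-up numbers: within each development tier, more vaccination goes with fewer *measured* cases, but richer tiers both vaccinate more and detect more of their cases, so the pooled correlation across all countries flips positive.

```python
import numpy as np

rng = np.random.default_rng(0)
rows = []
# (mean vaccination %, fraction of true cases detected) per made-up development tier
for tier, (vax_mean, detect) in enumerate([(10, 0.05), (40, 0.3), (65, 0.8)]):
    for _ in range(30):                                  # 30 fake countries per tier
        vax = float(np.clip(rng.normal(vax_mean, 8), 0, 100))
        true_cases = 500 * np.exp(-0.02 * vax) * rng.lognormal(0, 0.3)
        rows.append((tier, vax, detect * true_cases))    # only a fraction is reported

data = np.array(rows)
print("pooled correlation:",
      round(np.corrcoef(data[:, 1], data[:, 2])[0, 1], 2))    # typically positive
for tier in range(3):
    sub = data[data[:, 0] == tier]
    print(f"tier {tier} correlation:",
          round(np.corrcoef(sub[:, 1], sub[:, 2])[0, 1], 2))  # typically negative
```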
I've heard a lot of stories (including a recently popular twitter thread) about children claiming to remember past lives, down to factories they worked at, the names of their parents, or details about the past (a certain building that used to be a different building).
In Tibetan tradition, the Dalai Lama is chosen in part by having the candidate choose their toys from a previous incarnation. Of course, it's entirely possible that this process be manipulated. The number of adults who claim to remember anything about their previous lives is very small. A credulous defender may argue that society conditions people not to make claims like that.
My priors tell me that this should not be believed, but I'm curious: what's your take, not necessarily on reincarnation, but on memories from prior lives? What proof would convince you?
> [detailed] memories from prior lives
Possible if we're talking about ancestral memories, but unlikely per se.
More likely that the brain is doing something weird, like it does *all the damn time*. Like, simulating an alternative life isn't even that unusual, given stuff like dreams and reading fantasy novels, or how we rehearse imagined conversations all the time.
People remembering things that can be verified but are not currently known. For example someone claiming to be a reincarnation of an ancient king and locating several previously unknown archeological sites.
Or digging out their stash of gold coins.
And this repeating multiple times. One can be an excellent archeologist and have weird spiritual beliefs, so it is not foolproof. Or they could have planted those coins earlier. But at least it is not fakeable just by looking at old photos.
But I would not bother with verifying it on my own; it is about as likely that Harry Potter was a documentary.
I can't think of any way to differentiate between "Kid knows fact from 100 years ago that I can verify" and "Kid's parents own a history textbook"
I would require things not known currently (and not in the textbook) and verifiable.
Describing location of already found treasure? Zero evidence.
Ability to locate treasure (or some archeological artefact, or give specification of not yet known document)? That is some evidence.
This simple experiment would convince me: I whisper a secret into subject A's ear. I then kill subject A. Subject B then repeats to me the secret. This experiment could be ethically performed using those about to be euthanized or executed.
Problem is they probably don't remember all the specifics, and you don't know if their soul or whatever you want to call it, ends up in a predictable location.
Yeah, the purpose of this experimental design is for positives to be convincing. To correct for the low power, you could just repeat it a million times or whatever.
It would take a while though. You would have to create somewhat memorable but weird enough phrases and visuals so that they are uncommon enough. And you would have to keep it secret, which would be hard to do. So every time a person is dying, you bring a posse of clowns and let them dance and say weird phrases. And then have those clowns sign non-disclosure agreements and threaten to sue them if they tell anyone.
And then you would have to ask a lot of parents of children claiming past lives to try and get specific information, and match that against your secret database of those artificial experiences given to dying people. And if you got a high enough % of children describing these weird experiences, then it would increase the probability of past lives? As long as the sample of children asked isn't so large that randomness could explain why some got it right by accident.
And I have put way too much thought into this already.
We don't really have any idea what things would be memorable post mortem, do we? So I'd just go with randomly generated "correct horse battery staple" type phrases (just making them long enough to make false positives extremely unlikely) and then have a web site where the first person to enter a correct phrase would win a billion dollars or whatever.
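A minimal sketch of what generating such phrases might look like (the word list and phrase length here are placeholders, not a recommendation; the point is just that a few random words already carry enough entropy to rule out lucky guesses):

```python
import math
import secrets

# Placeholder word list -- a real run would load a large published list (e.g. Diceware).
WORDS = ["correct", "horse", "battery", "staple", "trolley", "lantern",
         "walrus", "quartz", "meadow", "sphinx", "ember", "glacier"]
PHRASE_LEN = 6

def make_phrase(words=WORDS, length=PHRASE_LEN):
    # secrets.choice gives cryptographic-quality randomness, so phrases can't be predicted
    return " ".join(secrets.choice(words) for _ in range(length))

bits = PHRASE_LEN * math.log2(len(WORDS))
print(make_phrase())
print(f"~{bits:.0f} bits with this toy list; 6 words from a 7776-word list gives ~77 bits")
```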
Interestingly the ones who have these memories never seem to remember having them later on.
I probably could not be convinced, as any information such a person could share that could be corroborated could also be gamed. If someone could teach them the information, then that's going to be the more likely reality. I suppose the strongest proof would be for them to describe something that no one living could have seen, for instance a sealed room or perhaps something extremely remote. The key would be that no one living could have known what was in the room/place, and then after the prediction it could be verified directly. Even that kind of scenario leaves a lot of room for someone to game it. In fact, that's something most of us would scoff at if a famous magician offered to do it. We wouldn't find it fantastical enough to even be interesting, as it would be so easy to fake.
In Western societies, neither materialist nor religious beliefs allow room for such a belief to be very likely. Materialist approaches would point out the lack of mechanism by which memories could be transferred. Christianity has very specific beliefs about what happens to your soul, to the exclusion of other possibilities.
I’ve been trying to frame Afghanistan in terms of the trolley problem. Here is my result, and I’d like to hear comments/feedback. Obviously, it has two semi-intentional features-not-bugs, namely (a) it only captures a subset of the scenario and (b) it’s a bit reductive.
The Afghani Time Travel Trolley Problem
=====
There is a runaway trolley barreling down the railway tracks.
Ahead, on the tracks, there are a few thousand civilians tied to the tracks, as well as a couple of hundred soldiers, and 45 bn US dollars worth of arms + supplies. When the train reaches them, most of the civilians will die, the goods will perish, the arms will be stolen by enemies, but the vast majority of the soldiers will probably escape only with injuries.
You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks.
However, you notice that there are 20 million women tied to the other set of the tracks and unable to move. If you pull the lever, the trolley will switch to those other tracks and will head straight for them.
Instead of killing them, it will transport the women to 9th century. In the 9th century, these women will have to wear burqas, give up most of the common freedoms women have had in the 21st century, and become chattel slaves to the men in society. Many of them will die due to maladaptation and shock. But overall, most of them will live.
You have two options:
1. Do nothing and allow the trolley to kill a few thousand civilians, destroy 45 bn dollars of stuff and injure (or let die) many soldiers.
2. Pull the lever, diverting the trolley onto the side track where it will make near-slaves out of 20 million women and their female progeny.
Which is the more ethical option? Or, more simply: What is the right thing to do?
Isn't this just an elaborate way of asking, "would you sacrifice millions of women to save some soldiers (but not equipment)" ?
The whole point of the trolley analogy is to render an ethical problem down to its most basic terms; it stops working unless the setup is cartoonishly simple.
In real life, the decision-making process is never that simple, since you are forced to reason under uncertainty. All of your decisions are probabilistic, and the tradeoffs are never crystal-clear. In addition, in this specific case, the decision is not between "save some soldiers" vs. "save all women", but rather between "continue losing lives and money every day" vs. "hopefully save a few women some of the time over a generation, if not a longer period of time". And this doesn't take into account the global geopolitical situation, long-term effects on terrorism, the ethics of cultural hegemony, etc. etc. You'd need to spawn a whole trainyard of uncertain trolleys if you wanted to illustrate this problem.
> The whole point of the trolley analogy is to render an ethical problem down to its most basic terms; it stops working unless the setup is cartoonishly simple.
@Bugmaster, thanks. This is the most convincing argument I've heard against the trolley-style framing.
For option 2, the Taliban also committed several massacres in peacetime, and it's reasonable to assume that would have continued.
For the $45bn figure, you'd be talking about the per-year cost I'd gather? It would still be understated, if so. Afghanistan was a >$1bn/week adventure, lasting decades.
For the returning-women-to-the-9th century, you mean returning them to the 20th century, right? 20th century *Afghanistan*, of course. Apples to apples.
And which fraction of the women are we discussing here, even? Yes I know that there has been a surreal two decades where some Afghanesses could enroll into Critical Gender Theory studies at university, the whole show essentially bankrolled by foreign taxpayers at an absolutely ludicrous overall cost. But even during those two decades those were mere oases in Afghanistan -- modern liberal versions of Potemkin villages, if you will. The western invasion did not change the mentality across the overwhelming majority of the country by *one inch*. If the astonishing speed with which the whole house of cards collapsed after the US military withdrew has not awoken you to this fact, I'm not sure what else could. You see, the Taliban are not genius military strategists, achieving this record-breaking reconquista with some new and unstoppable military innovations such as the phalanx or the blitzkrieg. They are simply an organized faction that's *way* more aligned with the mentality of the overwhelming majority of the population, than the western invaders ever were. They encountered virtually no resistance. The US leaving decent modern weaponry to a small minority before pulling out, has not made much of a difference.
For most women there, things will not change that much in 2021, simply because things had not changed that much since 2001. But at least now Western Academia can hold a string of parties to celebrate the fall of that bastion of vile western neocolonialism, the defeat of those horrible supremacists who openly held their culture to be superior to an equally-valid indigenous one, and who used mechanisms of oppression to impose their values on a subjugated people -- some of whom have even ended up internalizing the oppressor's worldviews. (OK admittedly I might be wrong on that one, as I'm generally too dumb to understand modern Western Academia)
If we only consider women in Kabul (~2 million) and $60bn / year, then in purely monetary terms this gives $30,000 per woman-life-year of being able to study critical gender theory vs. living in the traditional Afghani society. This is within the range of things considered cost-effective in the Western world, although not necessarily cost-effective globally.
Are you quite sure about your denominator in the calculation which gave you the "cost-effective estimate by Western standards"?
How many of Kabul's female population (circa 2.3 million, ages 0 till death) would be able to go to university, regardless of the curriculum taken?
No, bar that; how many women *had* been going to university while the US-propped government was in place?
(sidenote: a UNICEF report that predates the Taliban take-over accounted 3.7 million children country-wide out of school, 40% boys 60% girls - in a country of 39 million people overall)
OK, bar that. Let's not even insist on university. How many women's lives were actually changed, in terms of them having an independent source of income from their work - even including women without university degrees?
The denominator will shrink to *quite* less than 2 million, I'm afraid, even with all those extra inclusions. And the quotient will increase to *quite* above the yearly salary of the average American who got taxed to make this happen.
I checked the figures on tradingeconomics.com. In 2000, during Taliban rule, 0.8 million women were employed across a population of 20 million.
By 2021, that figure has increased to 2.4 million women out of a population of 39 million.
Adjusting for population increase, the 2000 employment rates would correspond to approx. 1.6 million women employed out of 39 million.
That translates to about 0.8 million more women employed, at a foreign taxpayer cost of $75,000 per year per woman.
This *is* very high, even by western standards. Most Ivy League institutions charge less than that.
I am confused about the factual claims you're making. Are we looking at the same plot, https://tradingeconomics.com/afghanistan/labor-force-female-percent-of-total-labor-force-wb-data.html? It shows a jump from ~15% to ~22% of the workforce being female; I can't find a way to change it to absolute numbers (maybe I need a subscription to do that). To be consistent with your numbers, this would require the entire workforce to be about 12 million out of a 40 million population now (and 5 million out of 20 million population in 2000). That doesn't match my intuition for the Afghani unemployment rate? Tradingeconomics (https://tradingeconomics.com/afghanistan/unemployment-rate) asserts that unemployment rate has been somewhere around 10%, and the 2020 Afghanistan population pyramid (https://www.populationpyramid.net/afghanistan/2020/) takes ~25% of the population out of the workforce, but that still gives me (22% - 15%) * (75%) * (90%) * (40M) = 1.9M women who are employed now and otherwise wouldn't be. (At, therefore, foreign taxpayer cost of $32,000 per year per woman.)
As Scott pointed out during lockdown analysis, in the West medical interventions are considered cost-effective up to ~$100,000-$150,000 per quality-adjusted life year. That, too, is a very large number; part of my point is that very expensive interventions are sometimes considered cost-effective.
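For whatever it's worth, here are the two back-of-the-envelope estimates traded above, side by side. Every input is a number quoted in this thread (the tradingeconomics and population-pyramid figures as the commenters report them, plus the ~$60bn/year cost used upthread), so treat the outputs as illustrations rather than sourced facts:

```python
ANNUAL_COST = 60e9               # $/year, the figure used upthread

# Estimate A: absolute female-labor-force counts, adjusted for population growth.
employed_2000, pop_2000 = 0.8e6, 20e6
employed_2021, pop_2021 = 2.4e6, 39e6
baseline = employed_2000 * (pop_2021 / pop_2000)     # what 2000's rate would give today
extra_a = employed_2021 - baseline                   # ~0.8M (the thread rounds to $75k/yr)
print(f"A: {extra_a/1e6:.2f}M extra women employed -> ${ANNUAL_COST/extra_a:,.0f}/yr each")

# Estimate B: change in the female share of the labor force times an assumed workforce.
share_gain = 0.22 - 0.15                             # female share, 2021 vs 2000
workforce  = 0.90 * 0.75 * 40e6                      # ~90% employed of ~75% working-age
extra_b = share_gain * workforce                     # ~1.9M (the thread's $32k/yr figure)
print(f"B: {extra_b/1e6:.2f}M extra women employed -> ${ANNUAL_COST/extra_b:,.0f}/yr each")
```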
We're looking at different numbers. You're giving percentages of female-workforce / total-workforce. I gave the total-female-workforce numbers -- as said, 0.8M in 2000, 2.4M in 2021.
Ignoring the doubling of the population, you'd be looking at a delta of 2.4M [@2021] - 0.8M [@2000] = 1.6M more women employed.
NOT ignoring the doubling of the population, the apples-to-apples delta would be 2.4M - 1.6M = 0.8M, the figure I used.
You're making unfounded presumptions about which portions of the population pyramid do or do not engage in work in Afghan society. You might be copying criteria that make sense in the west, and those criteria can wildly miss the mark in Afghanistan. Published unemployment figures are practically useless for determining labor force participation, even in the west.
My factual claims simply boil down to taking the figures representing the number of women in the labor force, as a representation of the number of women in the labor force. https://tradingeconomics.com/afghanistan/labor-force-female-wb-data.html
Scott's figures of $100k - $150k per QALY are taken *intra-society* -- i.e. if Society S has a product P per person per year, then we're discussing exchanging 2*P to 3*P for a QALY. In plain english, US has a GDP per capita around $50k, and two or three years of such product could be spent to give a US citizen one more QALY.
Fair enough. But perhaps more typically relating to the final few years of one's life "reclaimed from nature", than to an open-ended decades-long arrangement.
Afghanistan's GDP per capita was $330 before US started pumping in the billions, and went up to $570 after it did. No "k"s involved.
I'm not asking you to multiply those GDP figures by 3, I'm asking wherefore do you assert a duty of a US taxpayer to fund Afghan QALYs while applying a calculation that is based on the US economy?
The cost to maintain it wasn't really $75k though. See this graph:
https://c.files.bbci.co.uk/15D54/production/_97482498_ustroopsafghanovertime.png
A reasonable amount of stability was ensured with a much smaller fraction of troops as you can see on that graph. So I think the real cost is possibly at least 5x lower. And arguably even lower if you consider that those soldiers and military equipment might be stationed somewhere else now. It isn't like all those soldiers were just fired over night.
So if you consider the above, then the marginal cost of keeping the Taliban out of Afghanistan might be in the thousands of dollars, not tens of thousands. Spread out over multiple countries.
brown.edu puts it somewhere north of two trillion, in an estimate that is slightly more substantiated than the graph you pasted above.
I agree that some of those expenses would exist even without the Afghan engagement, and there could have been some overestimations -- so I'm operating with about half that figure in my other comments.
https://watson.brown.edu/costsofwar/files/cow/imce/figures/2021/Human%20and%20Budgetary%20Costs%20of%20Afghan%20War%2C%202001-2021.pdf
The CRT comment sounds facetious at best. In my experience at least, people don't go from "I have nothing" to "I want self-actualization", they usually go through a few intermediate steps that often look like "I want to better my people" (I want to become a doctor/lawyer), "I want to gain *lasting* status/money" and so on. CRT in specific, and literature/history/sociology style humanities in general, rarely pave the path to enduring freedom for first generation oppressed individuals in 3rd world countries. This is not to say that art, history, humanities can't contribute to these struggles - they can - but investing in formal education pathways along art/history/humanities specialization is not how people (with very little, who've experienced duress most of their lives) tend to spend their one shot at betterment.
Even in an improved Afghanistan, I'm generally skeptical that society would allow girls to study beyond high school/10th grade very often. If you want to keep fine-tuning the cost-model, then you'd have to account for the fact that mere high-school education costs upwards of 75k USD/year for many girls out there under the "NATO/US militaristic help" model, assuming all of the military expenditure's goal is/was women's education (and nothing else).
I wouldn't model success in terms of degree acquisition alone BTW - that doesn't sound like a good measurement. To me, things like basic literacy are excellent indicators to look at. Next, female lifespan is another indicator (which ought to correlate well with increased literacy levels), and infant mortality rates (also gets better with high maternal literacy and well being), the role women play in informal employment sector (i don't know how we'd measure it but it seems important that we account for this), rates of child marriage (this ought to drop to count as good), education attainment levels in the next generation (increase in maternal education will have a positive impact across the board on the next generation) and so on. I don't have good references to show that these sorts of interdependencies ought to work, but these were all taken as a given in India in the 80's/90's when I was growing up there and formed the background tapestry of public discourse, and my sense over the years is that most of it worked the way it was posited.
But I also admit after all's said and done and some more clever math is undertaken, it could all still amount to a net RoI that's not very attractive to most.
It's fine if you wish to call my remarks facetious. I think I mentioned CGT not CRT, but I also mentioned I'm too dumb to understand modern Western Academia anyway. I don't want to linger on those remarks because they are hardly the bulk of my point.
The point is that forcing western liberalism as a value system has practically not left a *dent* on the mentality of the Afghan population at large -- in spite of two decades of military engagement, plethoras of NGOs and insane amounts of money burned. In 2000, 8% of the female population was employed. Two decades and *over a million million dollars* later, 12% are employed. I'm not that hard to impress, but I'm not impressed.
The point is how blind a large portion of the west is to just how *spectacularly* the project of imposing western liberalism has failed there. The cognitive dissonance is sometimes so bad that people have started manufacturing trolley problems which posit that no less than a full 100% of the ~20 million women in Afghanistan had been basking in the light of western liberalism, until the wild money hemorrhage has been stopped, causing them to return to 9th century Afghanistan. More like 20th century Afghanistan, and even that only for a fraction of the women; the vast majority of women weren't much affected by the western-liberalism bit anyway, but could occasionally be affected by the persistent-low-grade-warfare bit.
You say that Afghanis are oppressed individuals fighting for freedom. I'm not following: *who* do you mean they are oppressed by? By western invaders? By some undemocratic forces? By themselves?
> Which is the more ethical option? Or, more simply: What is the right thing to do?
Haven't we just gone through a few centuries of colonialism and imperialism, and in retrospect, now see that this kind of thinking was wrong?
What is your confidence that how you've defined the two tracks is both precise and accurate? Frankly, I think it's near zero. Furthermore, does your argument not clearly extend to policing the whole world, and thus basically justify starting a new world war?
Finally, there are more than two tracks so your scenario is a false choice. For instance, it seems like it would cost significantly money and lives to simply help anyone who wants to leave Afghanistan. Then you can just leave it to the Taliban.
> or instance, it seems like it would cost significantly money and lives
Err, significantly *less* money and lives that is...
Ahead, fading into the distance, you see an infinite number of identical track switches along the current track, each one of them leading to thousands of dead civilians, 45 bn dollars lost, and a couple of hundred soldiers maimed or dead.
The number of potentially time-travelled women on the opposite track increases at every switch, but the tracks with the women always end at a trolley stop instead of another switch.
You can either let infinite civilians and soldiers die and let infinite money be wasted or you can at some point send some finite number of women to the 9th century where they will mostly survive.
Do you *ever* pull a lever?
>You can either let infinite civilians and soldiers die and let infinite money be wasted or you can at some point send some finite number of women to the 9th century where they will mostly survive.
I don't think we can discount their descendants from this, about half of whom would also be women in any given generation.
It seems pretty implausible to me that we can at all accurately predict what Afghanistan will look like in even ~3 generations, and even more implausible that it would specifically be exactly what the Taliban want.
How confident are we about how the switches in the trolley work? The classic trolley problem is good because a physical switch that moves tracks is a familiar device we can be confident in. The variation where we are told that a large person on a footbridge can be pushed off by me and will certainly stop the trolley involves overriding our physical intuitions about how these situations work (which may still be playing a role in our moral intuitions). With the Afghanistan trolley problem it seems likely that the switching circuits involve lots and lots of feedback loops, so that it's unclear whether option 1 will *also* lead to many women being forced to live in 9th century conditions, and also whether option 2 will *also* lead to the deaths of many civilians and the destruction of skyscrapers or other physical goods.
Many women live in 9th-century-like conditions today in parts of rural South East Asia and the Middle East, yet we don't describe those parts of the Middle East and SE Asia as a whole as regressive, 9th-century-like societies. I concede there is truth in your observation that #1 would also lead to many women being forced to live in a highly regressive society, but it would still be a flawed 21st-century society with regressive features. Implicit in my trolley-problem framing is my fear and prediction that under the Taliban, the society of Afghanistan would stop resembling a "21st-century society with flaws" (at least for women) and become much more like a 9th-century society. For me at least, there is a stark difference in values between those two kinds of societies.
You're also right about #2 potentially leading to civilian deaths. I was looking at this infographic by the UN https://twitter.com/UNAMAnews/status/1419531459220619265?s=20 for the annualized historical casualty rate there. OTOH, the historical casualty rate there has actually dropped YoY since 1996 according to the World Bank - https://data.worldbank.org/indicator/SP.DYN.CDRT.IN?end=2019&locations=AF&start=1996. Maybe going forward the Taliban would have an adverse effect only on quality of life but not on the death rate, or at least that's my guess.
How many of these trolleys do you have? One a year? One a century? One ever? (The last one seems optimistic in Afghanistan.) What quality of life do the women have while the trolley is running through the civilians, goods, and arms? (Civil wars, even 20th century ones, are generally not great to live through.)
I think this one got me thinking, and I'm going to settle on "one (over a long duration, equivalent to 'ever')".
In practice, what we've learned in Afghanistan over the past 20 years is that the quality of life isn't anywhere close to what someone in a western society would recognize, but it was (is!) still much better than life under the Taliban before. At a minimum, for women, coerced marriages haven't been endemic (although I'm skeptical of reports that they're gone now - they still happen in places like India, so I doubt they're erased from Afghanistan in a mere 20 years), women can go to school and work, and aren't forced to cover their faces, and so on. I doubt these are universal personal freedoms (again, because these legal freedoms aren't yet universal personal freedoms in all of SE Asia, so Afghanistan couldn't have leapfrogged SE Asia in a mere 20 years), but I think this was a significant and good start.
The way I see it, the trolleys themselves are metaphors for choices we make, and in this instance, they are about trying vs. not trying to improve conditions. The framing in Option #1 feels (perhaps is) overly specific to trying via one methodology (foreign, mostly US, coercion/"help"/troops). I don't know if direct militaristic intervention needs to be the 'try' approach, but that's the one I was most familiar with, and I felt like that's what would resonate with readers.
I'm trying to point out through the trolley problem that giving up on trying is much worse than it looks, or at least that it shouldn't be so easy to pick option 2 over option 1. Anyone in favor of option 2 should really be in favor of a better "option 1", and saying "it's their problem" is really an awful thing to say - and for those reasons we should work very hard to find different/improved ways of preventing Afghan society from falling all the way back to pre-modern times for women.
Now, maybe this point cannot be made as long as Option 1 stays in its current form (perhaps it's a socially/politically toxic option and presenting it in any form makes the discussion untenable), or maybe the lesson here is that the trolley framing isn't the right vehicle (;-)) for making this argument.
Is it a bad idea to get J&J after getting BioNTech?
I read an article in the Atlantic suggesting that overstimulating the immune system was bad.
https://www.theatlantic.com/health/archive/2021/08/covid-booster-shots-biden-8-months/619789/
It wasn't helpful for answering your question head-on, or for making other risk/reward decisions carefully, but it added enough uncertainty to make me wonder whether getting an early booster was really worth the effort.
I also read this in Science Mag about Hybrid Immunity which (I think) says that adding one mRNA dose on top of a viral vector dose like AZ is awesome - https://science.sciencemag.org/content/372/6549/1392.full.
Since J&J is similar to AstraZeneca, I'd imagine that the study translates well to J&J, but I haven't seen studies that start with mRNA and then add viral-vector later on - these studies all seem to start with a viral-vector dose, then add an mRNA dose on top of that. Maybe it's reasonable to assume commutativity, maybe it isn't - hard to say.
No. Why would it be?
No data on that combination, and I don't have a model of the involved biology
OK, well, it's not complicated. In both cases you've got a bit of nucleic acid as the key cargo, and in both cases the nucleic acid specifies the structure of the S ("spike") protein* on the SARS-CoV-2 virus, which is what docks with the human ACE2 receptor and allows the virus to gain entry to the cell. It's the "lockpick" that opens the door, so to speak. The idea is that you'll fool the vaccinated cells into building a crapton of S proteins, which they will express (show) on their surfaces, and which will by complicated mechanisms lead to your immune system realizing some of its cells are infected and need to be killed, and by the way we should be on the lookout for this S protein thingy, which is the marker of evil invaders.
The first important difference between the two vaccines is that the J&J vaccine uses the DNA that codes for the S protein, while the mRNA vaccines (Moderna and Pfizer/BioNTech), as the name implies, use messenger RNA**. The DNA has to get into the nucleus of the cell, where it is then transcribed in the usual way into mRNA, and then translated to S protein by ribosomes. The mRNA vaccines skip this step by directly flooding the cell with mRNA transcripts. (That does mean the vaccinated cell is carrying around foreign DNA for a while, but it's about to be killed by the immune system anyway, so it doesn't matter.)
The second important difference is the delivery vehicle. In the mRNA vaccines the RNA is enclosed in a lipid nanoparticle: a bunch of ordinary fatty acids plus cholesterol plus some stabilizing derivatives of polyethylene glycol, a common polymer used in stuff that goes inside the body. This gunk surrounds the mRNA, shielding it from immediate degradation, until the lipid shell makes the particle fuse with cell membranes, releasing the mRNA into the cell. On the other hand, the J&J vaccine uses a hollowed-out wild adenovirus (a common kind of cold virus), serotype 26, called Ad26. The DNA is inside the adenovirus, which gains entry to cells the usual way such viruses do. The important distinction here is that if you have ever been exposed to Ad26 before, you may have some pre-existing immunity to that virus, in which case the vector may well be destroyed before the vaccine has a chance to do much -- this has happened before, and is why J&J chose a more unusual adenovirus, to which fewer people have already been exposed. It's also why there is just one shot -- once you've used Ad26 as a vector, you can't really use it again, because by then, by definition, your patient has been exposed to Ad26, and any future vaccine based on it won't work very well.
It's also the case that the remnants of the vaccine are disposed of slightly differently. The remnants of the Ad26 are just random proteins, and get chopped up and digested in the normal way. The normal lipids in the mRNA nanoparticle do the same thing, but some of the less common molecules make their way to the liver, that great detoxifier, where they get oxidized into more ordinary (disposable) molecules over a few weeks (this was explicitly studied in rats before they tried it on people).
Key takeaways:
1) So far as anyone knows, all the foreign material is gone after 2-3 weeks, so there's nothing left of it by the time your second shot comes around.
2) The mechanism of immune provocation is essentially identical in both cases; after all the hokey pokey, all you're doing is tricking cells into expressing S protein, which the immune system takes as a heads-up that it should be on the lookout for this weirdo.
3) There will certainly be slight to modest differences in how strong the immune response is, because I don't think the modified S protein they use is exactly the same, and because of the variations in how you trick the cell into making S protein. There's a bunch of subtlety about how the body does this whole process that is still unknown, and that could throw some interesting wrinkles in, e.g. how well one method defends you against variants, et cetera. But none of these things should cause evil synergy between the two vaccines, so far as anyone knows -- it's just a question of how much good synergy there might be.
------
* It's actually the code for a slightly-modified form of the S protein, which is stable by itself, as the normal S protein needs to be embedded in the virus outer coat to be stable.
** Actually they use a modified mRNA, which contains slightly weird nucleotides so it resists degradation, as there is machinery in the cell that would otherwise chew it to pieces in no time flat.
Thanks.
Sorry, I realize something I said above was ambiguous: it's the J&J-vaccinated cell that ends up carrying the extra viral DNA, but that doesn't matter because it's a dead cell walking. It's also why we don't care whether the cell is deranged in some way by having its ribosomes hacked to manufacture a billion copies of viral S protein. The cell is going to get killed anyway.
Everything immunologists have been saying publicly about immune theory suggests that being exposed to a new form of the virus (a different vaccine, or a different strain) should boost your immune protection, if anything, a bit more than being re-exposed to a form you've already encountered. The only study I am aware of with empirical results about mixing vaccines used AstraZeneca and BioNTech/Pfizer (https://www.nature.com/articles/d41586-021-01359-3), and it seemed to suggest that the mix-and-match in either order was about as good as two doses of the mRNA vaccine in terms of antibody response, and better than two doses of the adenovirus vaccine, with slightly higher fever-type side effects. I don't think the study was big enough to determine whether the rare blood-clot side effect of the adenovirus vaccine was made more or less likely.
If vaccine passports remain important in many cities for a long time, and if Texas continues to make it difficult to access electronic records of our vaccination, then I have been considering getting a dose of Johnson&Johnson on a visit to a coastal city in order to have a good electronic record of a vaccine, even though I already have two doses of Moderna and a paper card recording them. So I would also be interested to know if anyone who actually works on this stuff has further thoughts beyond what I've gathered from media discussion.
Who all remembers the SSC posts, "Legal systems different from ours, because I just made them up" and "Archipelago and Atomic Communitarianism"? Inspired by those, and by the various ACX posts on ZEDEs, I set up a subreddit (r/archipelago)! Right now it only has a handful of seed posts by me intended to give a sense of the purpose of the subreddit, but I hope others contribute and it becomes a repository of many such ideas, ranging from the practical like ZEDEs to mere fictional amusements like the legal systems post.
Why? Some of us write such things for fun. Some of us write such things to explore the range of the politically possible, in hopes of developing ideas that will influence the future. I think it's potentially valuable for both purposes to have a publicly accessible collection of designs for alternative political systems. So if you agree, come on by https://www.reddit.com/r/archipelago/ and share your ideas!
Perhaps also consider including Scott's old series of posts on Raikoth: https://slatestarcodex.com/2013/04/15/things-i-learned-by-spending-five-thousand-years-in-an-alternate-universe/ and https://slatestarcodex.com/2013/05/15/index-posts-on-raikoth/
OpenAI Codex live demo. GPT-3 writes code!
https://www.youtube.com/watch?v=SGUCcjHTmGY&t=683s
Pretty amazing stuff, but does doing AI research cause early male pattern baldness?
n=1 for me it sure seems to!
Can't wait to see the post about the new health care transparency law and the wild data it's revealing about prices.
What law? I must have missed this.
https://www.nytimes.com/interactive/2021/08/22/upshot/hospital-prices.html
Oh wow - I had been seeing this headline when I glanced at the news the past few days, but hadn't actually clicked, so I didn't realize the information it contained was the result of a new transparency law! I had assumed it was just the NYTimes compiling numbers that insurance agencies had already been able to see for years. Since it appears this is newly public information, that could well shake a lot of things up!
(I noticed in an article about inflation the other day that health insurance was apparently one of the few expenses whose cost had been going up dramatically before the pandemic, but started actually going down during the pandemic - I wonder if this transparency law is somehow related.)
My guess was going to be all the elective surgeries that got postponed because people didn't want to go anywhere near a hospital if they didn't have to.
I would be surprised if a reduction in spending for one year would have propagated through to prices that quickly, but I know little enough about insurance pricing that I can't rule it out. It seems more plausible to me that changes in negotiated prices they pay hospitals would change their plans, and thus prices they charge for coverage, but saying it out loud like this does make me skeptical of this too.
New Parents / Expecting Parents discussion, anyone? It seems like we've had a few in this group based on some past comments.
Do you guys think there are certain innate differences in how babies respond to fathers vs. mothers? We have a 3 month old, and often she gets so tired that she starts getting more agitated and wound up, and as much as I try troubleshooting (diaper, food, burp, these are easy), I can't calm her down and get her past that high agitation state when she needs to sleep. My wife is very successful at it, even if it takes a while. She's had a lot more practice though, since she was off 12 weeks, and now she's left work.
Also, daycare is such an incredible challenge, frustration, and expense that we are now a one-income household. With our baby struggling to gain weight and around the 6th percentile in weight at 3 months old, we couldn't stick it out with our daycare. They would barely try to feed her or deal with the little mannerisms that make it seem like she doesn't want to eat.
We live in a small city, but the wait time to sign up and expense of daycare is still quite bad, I can't imagine how it would be in a big city.
My experience (5 children) is that children respond differently to each parent, from the moment of birth, and it will vary all through their lives. There are times when you'll be the favorite parent, and other times when your wife will be, and I don't think you can predict it. It probably won't even have anything to do with what you or she do, so I wouldn't try to analyze it too far or try to change what you do -- it's probably just what's going on inside their tiny little heads, how all the neurons are jostling around, making and breaking connections.
It helps to cultivate a state of Zen acceptance about this stuff. Do your best, but don't let the occasional weirdness stress you out or make you doubt yourself*. And don't expect things to be picturebook perfect either. Children are little people, and they have all the quirks and individual strengths and weaknesses of people in general. Furthermore, probably only half the outcome, now or later, is dependent on what you do. A lot of it depends on their nature, and on what they do -- and even at a very young age, children start to make decisions on their own, at least a little independently of what you do.**
------------------
* Consider it training/prep for the high-school years.
** Which is ultimately a good thing, it's how children avoid acquiring some of the worst habits of their parents.
I was unfortunate (fortunate?) enough to have step kids who were past the small-child stage and into high school before my tiny one was born. I did not fare well with the very small stage (under 1 year) but am doing much better at the 3yo stage. The whole "I need IO that I can parse" thing is a very big part of what makes parenting possible for me.
Yes, I think most parents have the experience of being "better" at certain ages. I certainly feel that way. There are certain ages I feel I can handle very well, and others where I feel more than ordinarily stupid. I should probably ask my oldest kids, who are old enough to have a sense of humor about it, whether they see the same thing (about me). It would be interesting if they did, and it would also be interesting if they didn't.
When our (now two year old) daughter was little, my husband stayed at home with her while I went to work. It was still much easier for me to calm her than him, but I suspect that was breast feeding related. He would have to carry her around for a long period of time before she would settle, whereas I could just nurse her, and it was a much different experience than bottles. It was also much easier for me to put her in a soft wrap and walk around with her than him -- I think something about having already adapted to having a baby in that location? Which she liked a lot.
I've got another coming, and will see if it's different or not.
New father of 6 months...
My daughter reacts differently to my wife and me. My wife is better at calming her down for bed or getting her to take a bottle if she's really worked up. That said, I'm not sure the difference is significant. We both do it reasonably okay.
My learning from the last six months has been: patience. Often she's crying her loudest and seems most inconsolable moments before she's asleep. I just grit my teeth and keep rocking her while continuously reminding myself of that fact.
We also (early, 1-3 months) did this thing that we just started calling "yo-yoing". She'd be inconsolable and then seemingly doze off and then start screaming a few moments later and then doze again. Initially I would get super frustrated each time she'd wake up, having just thought I was finally in the clear. But after a few attempts I noticed the pattern: each scream was shorter and each doze was longer until she was finally asleep. So I just mentally prepared myself for a half hour to an hour of yo-yoing and suddenly it didn't seem so bad: each period felt like progress.
All of this was psychological and mostly just learning to be patient (I'm not). Shortly after she was born, I heard this great Sam Harris episode on "framing any given moment as your potential last time doing X". Using this mental framing helped me to wait out those long bouts of crying. Sure enough, I've already put her down in her bassinet for the last time because she just upgraded to her crib. I already wish I had appreciated it more the last time I did it.
All the best! Holy hell is it a lot of work, but I love being a Dad.
That yo-yoing thought makes a lot of sense. That's a good thing to consider. We go through the exact same thing.
Do kids react differently to different parents? I haven't seen a child who doesn't. My daughter didn't want anything to do with me for the first two years of her life. When my wife left the house, she would scream from the time the door shut to the time my wife came back. My son was totally different. All he wanted as an infant was to take naps on my stomach while I took a nap in the recliner. They both grew out of it after a while.
In twenty years, you'll likely be glad you traded income for more kid-parent time. It sucks now, but take solace in your future self's preferences!
This was also something I noticed with our kids while they were little. For the most part I attributed this to the fact that, like your wife, I stayed at home with the kids which I think has two effects.
First, on the baby's side, I think there's just an innate recognition of Parent Who Is Always There, the effects of which are more pronounced when the baby is upset and/or sleepy, like the situation you described. With our first, it was basically impossible for my husband to calm her down or put her to sleep for the first year.
On the parent's side, I found that because I was with the kid all the time, I just really became attuned to every little thing that worked/didn't work. There were just little quirky things that I figured out worked for the baby that, when I tried to give suggestions to my husband, just made me sound like an insane person. ("She likes being held upright at first, but then you've got to shift to a cross-body cradle position once she gets drowsy enough. When is she drowsy enough? Oh, I have no idea how to tell you that. It's just something I can tell when it happens.... Also, once she starts closing her eyes, you've got to bounce every few steps.")
I also realized once my kids had gotten older that their difference in response to my husband also largely tracked with how their personalities developed. My oldest, who gave my husband fits, is now a very independent kid who is loving and affectionate, but a bit standoff-ish. She doesn't like being hugged too tight and isn't really "cuddly" by nature. Our second, who tolerated my husband much more than the first is now a very demonstratively affectionate kid who gives hugs and kisses left and right and still loves to be held and cuddled.
Kid #3 is still a baby, but he absolutely loves dad and it's very rare that my husband can't handle him when he's fussy and/or needs to go to sleep. Beyond what I mentioned above, I also attribute this partly to the fact that after a couple kids you're not nearly as neurotic about things and babies can tell when you're stressed out or relaxed.
Just hang in there! All of this is inherently stressful, and when you slap on weight gain issues as well, everything gets dialed up to 11. Our oldest was very low birth weight because of medical issues on my end (less than 1 percentile!) and had nursing/feeding issues because she was so small. It was a very rough first couple months. If it makes you feel better, she didn't get out of the 5th percentile till she was 9 months old! Once we introduced solid food she jumped from the 8th to the 17th to the 30th within 3 months. Before that, we went from 4th (2mo) to 3rd (4mo) to 0.9th! (6mo).
Very encouraging to hear the stories of different kids being different. Yes, we are still in that neurotic stage. Also encouraging to hear how your oldest recovered.
My wife and I are both very short. I'm near the 1st percentile in height for men, so I guess it shouldn't come as a surprise that our baby would struggle with that, but we joke that she's going to be tall and skinny because she's jumped up in her length much more quickly than weight.
Same age kid here. I think there are innate differences with response to mom/dad, but they probably aren't as big as you think they are.
I plan to hire a nanny come December (right now, baby is looked after by my husband and Mother in law during the day). My main concerns with daycare are illness + lack of individualized care: not enough people to comfort him right away when he's upset or needs help going to sleep. I would be much more open to daycare once my kid is physically stronger and able to socialize with other kids.
With illness, are you specifically looking to trade "illness now" for "illness later"? My understanding was that they will be exposed to about the same amount of different diseases over the course of a lifetime anyway, so with most childhood illnesses you only get to pick when, not whether, they happen. (Although there's definitely a few that are particularly common in small children, e.g. ear infections and croup are a lot more of a young child problem due to young child anatomy, so you do get to just skip them.)
The other thing to be prepared for is that with a kid in child care, you will get sick a lot more often too. I went from getting a cold every few years to getting a few colds every year.
Of course now you've got to worry about covid as well.
Yep. I think it's better to postpone sicknesses for when he's older, because he's less likely to have severe complications and can emotionally cope better with being sick.
Makes sense -- my son's first cold (~3 months old) sucked, and when he was ~5 months and a visiting ~9-month-old passed off his cold to us, I remember thinking "I wish we were at the point where you can be that cavalier about a runny nose..."
Don't underestimate those 12 weeks your wife spent with the baby. It may be a matter of time and patience, and you may need to spend an equal amount of time soothing the baby so that she is just as comfortable with you as with her mom. Calming an agitated baby can be extremely stressful, and knowing that your partner can just take care of it will be a strong temptation. How you move forward will depend on you and your wife's tolerance for taking the time and arranging your schedules. My wife took on that responsibility in our household, and the kids were more easily soothed by her most of the time. I was still involved enough to be able to do it fairly well, so that I could step in when really needed (aka my wife on edge and needing a break). Our middle child got really bad gas around 11pm almost every night for months, and would be up crying non-stop until it passed. My wife and I would take turns trying to sleep and carrying around the baby. Oddly enough that's a good memory now, despite being a really hard experience at the time.
Yeah, I agree she's had more time and that's a big deal. I tend to walk around with and burp our baby more often, but soothe her less. She likes being over my shoulder and she's gotten really good at keeping her head straight. I still have to remind myself sometimes that she needs head support, because usually she does pretty well on her own.
I know these memories will be fond ones, and the more I can share the load, the less overwhelming it will be for my wife. Our baby's at the stage where she's doing a lot with her hands now; she whipped the glasses off my head yesterday.
Children vary a lot in how they respond to parents. My oldest was fine with either my wife or myself, and middling decent with other caregivers, but our youngest only wanted mom. With the younger I could, after a lot of time and struggle, get her to take a bottle or go to sleep for a nap but most people (grandparents, sitters) couldn't. We were super lucky to find a day care with a very experienced infant caregiver who could get her to take a bottle or nap but I'm not sure what we would have done if we hadn't (my wife probably would have had to stay home like yours).
I would mostly just advise you to cultivate extreme patience on the issue: after trying the usual suspects you mentioned just keep holding and rocking. Keep the head elevated as you do so to help with reflux. It may take you longer than your wife, and you may never be as good at it, and just remember that it may have more to do with the child's preferences than anything you're doing or not doing.
Some good tips. Good to hear some positive experiences of daycare. We got on the list at several places as soon as we knew for sure and only got a response a few months before my wife was supposed to go back to work.
I'm sorry your daycare didn't work out; for us it improved the parenting experience so much! (Mostly just because we both prefer the company of adults and computers to that of babies, but also, they somehow taught our kid to enjoy tummy time.) We didn't have to wait to sign up, but we were able to shop around (bigger city advantage?) and find a place we liked that was able to take us. We were doing this about 3 months before the kid, so about 6 months before starting the daycare. In a big city on a coast, I hear you may need to start even earlier.
No idea on mothers vs. fathers, I don't recall a major difference with ours; some of the time the favorite parent is clearly the dog.
Good luck getting out of 6th percentile! We went through that when the kid was 1 month and discovered he just really preferred bottles to breast, which of course then made daycare much easier. (He went from 6th to 96th to just being a large kid.) We did have to experiment a little with bottle types, there was one he liked even less than the breast.
Yeah, my wife didn't think she could do a good job with breastfeeding, so we did formula right from the start, but we've changed formulas and now we are on the Nutramigen (expensive) stuff, and we've found an anti-colic bottle that seems to help her too.
Does anyone know from where the (Christian) position that after death you will get to know everything (from the meaning of life to who ate your sandwich one day) comes from, or starts at? Is it there in the Old Testament or is it some human universal or what? Sorry, I know nothing about religions, I grew up in a mainly atheist society built on a previously Christian one, but all through my life I've vaguely noticed claims that if you reach Paradise you'll Know It All.
There's a strong argument that the old Jewish and early Christian understanding of the afterlife is not the same as the modern Christian one. Prof. Bart Ehrman has written a book on this. Here's an interview about it: https://text.npr.org/824479587
Hell, there are dozens of mutually contradictory modern understandings
A version of this idea is already there in Plato. In the Meno, Socrates argues that since an uneducated boy is able to learn that the diagonal of a square is the side of the square with double area, merely by answering questions Socrates asks and without being told anything, therefore the knowledge must already have been present and just needed to be recalled. He generalizes this to suggest that when souls are disembodied (before birth and after death) they are in contact with the forms and therefore have all knowledge, and in life, what we think of as learning is mainly recollection.
This idea sounds similar enough to what you're describing as a Christian view, and I know many early Christian views were heavily influenced by neo-Platonism, that perhaps this is a relevant influence.
https://en.wikipedia.org/wiki/Anamnesis_(philosophy)
In the Meno, Plato's Socrates does indeed advance the theory of learning as recollection - but he doesn't present it as his own theory. Before introducing it, he says that he learned of it from "priests and priestesses" (ἱερεῶν and ἱερειῶν). This suggests that while Plato might be the first *extant* Western source, he is likely not the origin point for the idea, and that it has a deeper and earlier foundation in Western religious tradition.
Also, it's anybody's guess whether Plato actually meant to suggest that Socrates actually believed the theory. By the end of the dialogue, consideration of the theory hasn't helped them answer the core question of the dialogue - whether virtue can be taught. I think Socrates is stepping Meno through a number of hypotheses about knowledge, all of which he thinks are wrong, intending to make Meno dissatisfied with all of them; the theory of recollection is introduced primarily to lever Meno towards conceiving of knowledge as something that must be supernatural. The answer Socrates actually believes himself would (of course) be the Forms, as detailed in the Republic or the Phaedo. But that's just my own personally favored interpretation, and I think most people disagree with me and think Socrates actually endorsed the theory of recollection, so YMMV.
Weird. A friend of mine, AFAIK well versed in theology, once told me that (at least according to some) once a sin is confessed and absolved it is effectively "erased", so that in the afterlife nobody knows who ate your sandwich at all.
But of course I can't find anything to back that up right now.
If God has numbered every hair of your head, I'm sure He also knows who ate your sandwich.
In Dante's Inferno the spirits of the damned can only foresee what will happen in the distant future; their clairvoyance vanishes the more the events are near in time. They know nothing about what happens in the present.
Inf. X, 100-105
We see, like those who have imperfect sight,
The things," he said, "that distant are from us;
So much still shines on us the Sovereign Ruler.
When they draw near, or are, is wholly vain
Our intellect, and if none brings it to us,
Not anything know we of your human state.
IIRC that's only the heretics, not all of the damned.
There's a long-running debate about this issue.
From Hollander's commentary:
"Most believe that what he says applies to all the damned, e.g., Singleton in his commentary to this passage (Inf. X.100-105). On the other hand, for the view that this condition pertains only to the heretics see Cassell (Dante's Fearful Art of Justice [Toronto: University of Toronto Press, 1984]), pp. 29-31. But see Alberigo, his soul already in hell yet not knowing how or what its body is now doing in the world above (Inf. XXXIII.122-123)."
Interesting, thanks.
Wow, I just gave a very similar answer to a different question elsewhere on this thread! Check out 1 Corinthians 13, a very important Bible chapter. Particularly around verses 8-12. [https://www.biblegateway.com/passage/?search=1%20Corinthians%2013&version=NIV]
The thing that we call "knowledge" on earth is completely different from how things are in heaven. To the point that Paul (the writer of 1 Corinthians) isn't even willing to call what happens in heaven "knowledge" without a bunch of qualification. Earthly knowledge is full of complexity and distortion and misunderstanding, like looking at an object in a messed-up mirror. Heavenly knowledge is like looking straight at the object. Another example he gives is that earthly knowledge is like the incoherent babbling of a baby, and heavenly knowledge is like straightforward adult speech.
That makes it very hard for theologians and pastors to teach about what heaven is like, when one of Christianity's main sources basically says that earthly human thought is defective and is going to be replaced by something better. "The first thing I have to teach you about this is that anything I teach about it is definitely wrong."
The tradition I was raised in would probably say if you wanted to know it all you could, but when you get to heaven you aren’t concerned with earthly things so you probably wouldn’t try to find out.
The Book of Revelation is probably the number one source for the eternal afterlife with God. My understanding, though, is that you do not instantly know everything; rather you have an infinite amount of time, and access to God, who does know everything.
Also, you apparently get to live in a Pluto-sized arcology constructed in space after Earth is destroyed, following a war in which trillions or quadrillions of people are killed by bombing (possibly orbital bombardment).
Or at least, that's the literalist way of reading Revelation.
Super interesting question. I'm an ex-Orthodox Jew, but the same question could apply. My understanding is that the Old Testament doesn't mention the afterlife at all and that it's clearly a later development of the monotheistic religions. I think the afterlife is sufficiently abstract and unsupported that every commentator could keep adding their own flourishes about what you get once you die. If I had to guess, I'd say that as religious commentators became more philosophical, to match the philosophies of their time periods, they got really into the idea that the afterlife meant "none of the limitations of the physical world". So not being held back by "time" means experiencing past, present and future simultaneously. Not having eyes means "you can see everything". Etc., etc. This is obviously more of a guess answer and I'm sure there are actual experts here who can be a lot more specific.
I love this guy and I'm sure he'll have the best solution:
https://www.youtube.com/user/ReligionForBreakfast
The Hebrew bible does mention an afterlife, but very scantily. Daniel ch. 12, for example. However, the rabbinic speculation about the afterlife is rife. And one rabbinic comment (influenced apparently by the Socratic assumption) is that you know all of Torah before you are born and then an angel presses you above the lip (the philtrum- that little indentation above your lip) and you forget it.
A nice anecdote about that: as Emerson was dying he was losing his memory and Bronson Alcott told him we forget things of this world as we forget things of the other in the transition.
Casually, I’ve heard that “when you die, you go to god and are with god, and god is all knowing, and you talk to god, so you can ask god stuff, and it’s heaven so you have all good stuff.” For a history of heaven a brief overview is https://en.wikipedia.org/wiki/Heaven#Hebrew_Bible. Different Christian cultures and denominations and times and peoples develop the idea of afterlife and heaven and such in complex and different ways. There’s a lot of thinking and folk ideas and variation and differences there. I don’t know much about it
I have some doubts about the plan to throw so much money at the "global climate problem". Please explain to me where I go wrong or with which points you disagree the most:
1. So far, the world is ca. 1 degree C warmer compared to the preindustrial era (leaving aside doubts about the precision of estimating preindustrial-era temperatures). So far, nothing terrible is happening; extreme weather events are not more devastating. The damages as a portion of GDP have plummeted, and so has the number of "extreme weather deaths", despite the much bigger world population. I sort of don't expect the following strong non-linearity to be true: compared to the preindustrial past, one degree up, nothing happens (or things even improve); another 1 or 2 degrees up, all hell breaks loose. Is there some magic number saying "this is the exactly correct global temperature"?
2. Some countries seem to actually benefit from modest, say 2 C, temperature increase. Canada, Sweden, Russia etc. There are surely some losers, but the net balance may even still be positive.
3. Even the IPCC, which overall sounds quite alarmist to me, estimates the global GDP loss from "untackled" global warming at only a couple of percentage points by 2100 (when global GDP is expected to be 4 or many more times higher than today).
4. CO2 seems to be fairly beneficial in some ways: agricultural yields are estimated to be 15 percent higher compared to a situation with pre-industrial concentrations. It is the main plant food: some deserts are already shrinking, the planet is greening. Surely it is a good thing?
5. Global warming seems to be saving lives even now: the drop in deaths from cold is several times higher than the increase in deaths from heat. No one seems to care for this, though.
6. The green deals seem to be extremely expensive, with a lot of the funds going as subsidies to inefficient technologies (solar, wind). If you argue that these technologies are already efficient, please explain why they need subsidies. This and new taxes have the potential to choke economic growth. I believe that my grandchildren would be better off in a world that is 4 times richer and 2-3 C warmer than in a world that is only 1 C warmer but economically stagnating. After all, rich Gulf countries have no problem operating in 50 C outside temperatures. And we are still talking about a difference of only 2 C, not going from 20 to 50 C. The necessary adjustment seems very minor to me, with plenty of time to adjust.
7. If things really seem to go bad, there are several geoengineering projects that might work. And these would be still two or more orders of magnitude cheaper than those green deals.
8. Many alarmist messages go hand in hand with subliminal or open "we have to end capitalism" or "our only chance is degrowth". Somehow I fear that for many people this might be the real agenda.
9. China, India, etc. still keep building coal power plants at a fast rate. What if (as I expect) they don't play along? Should the West economically suffer and let them outgrow us (while they emit the CO2 we save, undermining all the green deals of the world)? Will a much stronger China be kind to us?
Any thoughts?
> the plan to throw so much money at the "global climate problem".
Hol' up a minute. Other respondents have responded to your numbered claims, but what about this?
The plan? There's a plan? A plan to actually do something? There's "the" plan to do something? Where is it, and can I read it? What are the numbers?
I thought I was pretty well up on this stuff, but all I see is politicians trying to draw attention to themselves, without actually *doing* anything that would antagonise their donors. I haven't seen anything that might be described as a plan, let alone co-ordinated action.
For example, two weeks ago the UK's minister for the environment said that the upcoming COP26 talks in November were "the last chance to avoid catastrophe", and in almost the next sentence he announced the issue of new oil and gas exploration permits in the UK section of the North Sea. No sign of a plan there.
Regarding the temperature, 1 and 2, the problem is feedback loops. For example, snow is shiny, reflects a lot of light back into space. If the ice caps melt, the albedo drops, and then earth absorbs more energy directly. Another dangerous feedback loop is the permafrost: a bunch of frozen plant matter in the soil in Northern Canada and Russia that's been frozen for hundreds of thousands of years. If it thaws (which it allegedly will at +2C), all that dead plant matter will decompose into GHG, and the amount of mass we're talking about here is greater than the amount of GHG we've emitted so far. These nonlinear processes make specific predictions hard, but they point very strongly to "bad."
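To make the feedback-loop point concrete, here's a toy sketch (emphatically not a climate model; the gain values are invented purely for illustration) of the standard linear-feedback amplification relation, response = forcing / (1 - gain):

# Toy illustration only: how a positive feedback turns a fixed forcing
# into a disproportionately larger equilibrium response.
# The gain values below are made up for illustration, not measured quantities.
def equilibrium_response(forcing_c, feedback_gain):
    # Standard linear-feedback amplification: response = forcing / (1 - gain).
    # As the gain approaches 1, the same forcing produces much larger responses.
    return forcing_c / (1.0 - feedback_gain)

for gain in (0.0, 0.3, 0.6, 0.8):
    print(gain, round(equilibrium_response(1.0, gain), 2))
# 0.0 -> 1.0, 0.3 -> 1.43, 0.6 -> 2.5, 0.8 -> 5.0 (degrees, for a 1-degree direct forcing)

The point is just that with albedo- and permafrost-style feedbacks in play, "one degree was fine" tells you little about whether two or three degrees will be.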
Re 7, I agree, but we still need to fund these projects, and that requires believing there's a problem in the first place.
Re 8, I suspect it's true, there are people who don't REALLY worry about climate change except as an auxiliary argument for their pet issue, be it socialism or some spiritual environmentalism. It's important, though, not to let wrong people cause you to reflexively adopt a different wrong position (like "we don't have evidence that climate change will be bad."). Some people are dumb and have bad arguments, but do you really want to shape your worldviews based on the bad arguments dumb people have?
I also have the impression that the bad effects are overplayed and the good effect underplayed.
But on the other hand, there are some effects that deserve to be called catastrophic, and that can not easily be solved by throwing money at them:
- The oceans become more acidic (carbonic acid). This will kill a lot of species, plain and simple. Even for species not affected directly, there might be secondary effects. The marine ecosystem might look really different from today if the earth is 4 or 6 degree warmer.
- The same might hold for land-based species, but here it's less clear what happens. Ecosystems might or might not collapse.
- Our civilization has thrown a lot of money and effort into optimizing infrastructure for the environmental conditions that we have now. Like, the population density is highest close to the coastlines, and the most valuable buildings are there. If the sea level really rises a lot, this means a lot of people will be displaced, and a lot of value lost. If some 100 million people from Bangladesh have to move, that is a problem that can threaten stability. If some parts of California just turn to desert, that is a problem. The answer "California is just a place, we can move elsewhere" is not completely false, but has lots of ramifications.
A problem is that the changes have a momentum of centuries. If we want to avoid changes, then it is not an option to wait and see whether they are good or not, and re-engineer the climate if it becomes unpleasant. Once the Greenland ice sheet becomes unstable, it's gone, and re-reducing the temperature will probably not change that. A lot of other things are like that. Once the jet streams or the Gulf Stream are gone, it's very unclear whether they come back in a cooler climate.
So it comes down to a gamble. I don't think that we know it's going to be disastrous. But the chances of disaster are non-vanishing enough that I would like to avoid them, even at high costs now.
Fair enough, surely one valid point of view. Just a note towards the acidification: oceans are alkaline, this "acidification" is actually only a tiny move towards neutral pH. People tend to imagine a sea of acid, a terrifying thought for sure.
I still agree that this change might be lethal to some organisms even so.
Yes, it's not going to be harmful to humans. But afaik, even the current changes are already problematic for animals which use calcium carbonate, which is quite common in the sea. (Sea shells, algae, sponges, ....)
On 6, governments subsidise a bunch of things for a bunch of reasons. In particular, fossil fuel subsidies are much greater than subsidies for renewables.
https://www.hrw.org/news/2021/06/07/qa-fossil-fuel-subsidies
Solar and wind are extremely cheap and effective. (I saw one discussion that it costs less to build and operate a wind farm than it does just to operate an existing coal power plant of the same energy output, but I can't find it right now.) But it's hard for them to compete against the level of lobbying, inertia, and financial support of coal.
But even if they weren't competing with a subsidised fossil fuel market, there's an argument for subsidising them to make the transition happen sooner. (Obviously this only applies if you accept that global heating is bad, which most experts do.)
I just took a look at the OCI Shift the Subsidies database which is allegedly the underlying data behind the linked article. I’m super unimpressed with this data source. Most oil companies globally are nationalized so, yea, they use their status as a state owned entity to get preferential credit. That doesn’t mean those projects wouldn’t get financed otherwise. (also, financing isn’t a subsidy). In fact, I suspect more projects might be financed given how inefficient state owned enterprises tend to be. These people have a huge agenda that taints the entirety of their data collection.
The oil and gas industry is both immense and profitable. Probably because oil and gas get used in almost everything we do and touch. You can critique the industry for all sorts of things, but saying it wouldn't be profitable without government subsidies is completely nuts. In the US, the government gets a share of the mineral rights, so it probably looks more like a net tax than a subsidy.
Coal plants are a hassle to operate. Nobody wants to buy them in the US right now. But they’re also way more effective at providing base power which matters a lot, like at night. They don’t need lobbyists to ensure their existence. Renewables need a base power value proposition and coal will be done.
As you can probably tell, I’m not very tolerant of the excessive excuse making by the renewable crowd. Just about everyone has bent over backwards to make wind and solar a thing and it’s time to either let them compete unfettered in the energy markets or move on to other technologies.
I never said fossil fuels need subsidies to be profitable (if the site I linked to said that then I apologise for not reading it thoroughly). But in general if you've got two competing products that would each be profitable on their own, then the amount of subsidies they get still has an impact on the relative market share.
I kind of take your point about the data gatherers having an agenda, but also, is there anybody without an agenda who's done this kind of analysis? If I'm right, and global heating is a big bad thing, and governments are doing too little too late, then anybody who looks at the data will come to believe that if they didn't already. When they present their data they will have a strong opinion and you'll be able to accuse them of having an agenda.
I agree that you need base power. I'm pro nuclear for that reason. Also putting batteries in houses.
I'm sure it won't surprise you to learn that I don't really think "the excessive excuse making by the renewable crowd" is at all fair, and the idea that anybody has bent over backwards... I'd characterise that as a few token drops in the ocean so that governments can look like they're doing something. But I'm not sure we'll get very far by trading perceived hyperboles.
If you have 100 hypothetical points to spend on wind, solar, nuclear. How would you allocate them?
100 on nuclear.
I'm sure there are a bunch of places where wind and solar are the best choice for the majority of electricity generation. But those technologies are already mature and competitive, so they have tons of investors and don't need extra help. The nuclear industry, on the other hand, was almost completely wiped out around 1980. It doesn't just need to be rebuilt, it needs new technology to survive in the modern regulatory environment. So on the margin, I'd put all my money on R&D for Generation IV nuclear, especially molten salt reactors.
I don't think I can answer that. First of all it depends on the country how important each is. Second, I think they need different things. Wind and nuclear suffer from NIMBYism. Regulations need to be eased to make it easier to build on-shore wind farms and nuclear power plants. One problem in the UK is that the government has a lot of MPs in rural areas with gentle rolling hills and conservative citizens who don't want their view to be changed. So you don't necessarily need to spend money on subsidising wind farms in those places, but you do need to have a government that's prepared to risk throwing some of its MPs under the bus for the sake of the country (and planet) as a whole. Of course their incentives make that basically impossible.
For solar, I want to see new build houses having more than a token 3m^2 of solar panels built into their roof. The government doesn't need to spend any money at all here: just impose regulations on house building. (I recognise this will increase the cost of that housing but I think it's worth it.)
I also think nuclear fusion is important in the long term. But I don't know enough about the state of the research to know if it needs any help at the moment.
I have many more thoughts on the precise ways to encourage those three areas, but I'll spare you the full version. But I hope I've explained why I don't want to talk about spending hypothetical points.
> I sort of don't expect following strong non-linearity to be true: compared to the preindustrial past, one degree up, nothing happens (or things even improve), another 1 or 2 degrees up, all hell breaks loose.
Wow. What if I told you that a glance at a phase diagram would demonstrate that thermodynamics is really *all* about nonlinear effects? If you're not convinced, maybe try an experiment: vary the temperature of an ice cube linearly and try to see if you notice any nonlinear effects.
Fair point, but still: I believe that there are some non-linear effects, and possibly even positive feedbacks (although nature is rather full of negative ones). Nevertheless, NOTHING bad has happened so far (within some statistical noise). Why should we suddenly be at a tipping point? Such that ALL HELL IS GOING TO BREAK LOOSE AND WE ALL BURN TO DEATH (paraphrasing the gist of most media coverage that comes to my attention).
Is there anything special about 15 C global average so that say 17 C is terribly dangerous?
Aside from the mass extinction (which isn't really a problem for humanity), the big issue is the amount of stuff near sea level and the amount of shift in crop-growing regions. It's not that the state of the world is better at 15C than at 18C, but that the optimal distribution of stuff in an 18C world is significantly different from that in a 15C world and "location of cities/farms" is difficult to change quickly without mass deaths (Siberia is going to become a lot more habitable and Bangladesh a lot less so, for instance, but "relocate all the Bangladeshis to Siberia" involves substantial retraining of farmers, construction of infrastructure and simple transportation difficulties).
WRT sea levels it should be noted that there are engineering margins of error basically everywhere; when those margins of error are exceeded, well, you're going to go from "not a big problem unless you get hit by a hurricane" to "a bunch of CBDs are underwater" fairly quickly. It should also be noted that sea levels are very slow to react - we've got +1C, but we haven't got the full +1C worth of sea level rise yet.
You're definitely right, though, that human extinction isn't going to happen from this. Even in the worst-case scenarios you're looking at ~1-2 gigadeaths; runaway greenhouse ("Earth turns into Venus 2.0, everything dies") has pretty much been ruled out (as in, "it's possible with sufficient amounts of GHG, but there aren't enough fossil fuels on the planet to make that much CO2, and the super-effective GHG like CFCs have been successfully phased out").
> "relocate all the Bangladeshis to Siberia" involves substantial retraining of farmers, construction of infrastructure and ...
... and Russia deciding it will issue millions of visas to Bangladeshis. But this is unlikely, as is Mexico/the US/Brazil granting visas to Central Americans, etc. So anti-immigrant sentiment is one major reason I'm concerned about global warming.
magic9mushroom left a suicide note in OT203 on Dec. 26. So... bye again.
I'd say "Nothing bad has happened so far" is also an exaggeration.
Consider the level of drought and food insecurity we are seeing now, vs. 20 years ago, even with modern technology.
I'm involved peripherally with agribusiness personally, and shit is fucking precarious for lots of high-value crops already. If rain becomes any more unreliable, we could see some staple crops becoming likewise unreliable in certain areas. E.g., there has been less snowpack on the Sierras year over year for 25 years, and if the trend continues it will be BAD for the world's largest Class 1 farming zone.
Maybe these effects don’t show up in commodity markets but it seems like we’re doing much better than we were 20 or 40 years ago.
That's also not the case.
For example, coffee commodity prices spiked in July due to drought in SA, and will continue to spike in the future as Africa is also at risk.
These effects are small-medium now, but they are only going to get worse as we exhaust glaciers and water tables, even if we stop warming now.
Look at GSCI and BCOM. The market trades within a range and generally downward.
If you feel the future will be different, put on the trade. But you can’t cherry-pick a particular crop and use that to support some broader climate agricultural catastrophe that isn’t happening.
It feels to me like there's an intuitive disconnect between "1 degree is a small amount of temperature shift" (true in any of my day-to-day experiences) vs. "1 degree is a HUGE amount of temperature shift for a planet-wide average". In particular, when you say "another 1 or 2 degrees up, all hell breaks loose," you seem to be treating 1 degree and 2 degrees as similar amounts of temperature shift, whereas I think for a planet-wide average they're really not.
Another thing is that there's much less warming on water than on land, and much less warming near the equator than at medium and high latitudes, so each 1 °C increase in the global average is a rather larger increase on temperate land.
But why? Even the "planetary average" is, I believe, an ill-defined concept. No single place on Earth cares much about the planetary average; it cares about the specific temperature it experiences. I have myself seen a drop of 44 degrees in 24 hours (Czechoslovakia, 1979). We had some "coal holidays" because of that: the coal froze and was hard to unload from the freight trains, so heat and electricity were rationed. Lots of fun back then. Local plants and animals are still well adjusted to such wild swings, so why would they suffer so much in a drift of, say, 2 °C per century? Some things will change for sure, but some things will change in all other scenarios too.
I think e.g. rainfall is driven by much larger-scale patterns than what you would normally experience. If you held everything else constant but just raised the temperature 1 degree C I think many places would be ok (unless that 1 degree C went from -0.5 to +0.5 and now solid ice becomes liquid water), but I don't think everything else is constant, fundamentally I assume because somewhere on earth lots of solid ice is turning into liquid water.
After all, my understanding is that the last ice age was only 4-5 degrees lower than current average; 4-5 degrees lower than current also doesn't feel like that big a deal (I live in a place with >30C summers, subtracting 4-5 degrees doesn't exactly cause water to freeze), but manifestly was.
Point taken. On the other hand, rainfall patterns change all the time, adaptation might be much easier and cheaper. Because of more evaporation there would be probably more rain overall, which is probably mostly a good thing. Speaking of ice ages, well, there is a problem we should be eventually much more worried about imho. Maybe we actually should save the carbon, to use it for preventing the next ice age in the future.
1. Extreme weather events are increasing [IPCC AR6 WG1, SPM section A3, p11]
The optimal temperature is probably whatever temperature human civilization has developed at, since any temperature change forces large (and costly) adjustments. Also, on a personal note, I suspect the cost of a species going extinct might be enormous: biotechnology has barely begun, and every time a species goes extinct we lose a potential set of biotechnological tools.
A summary of positives vs negatives: https://skepticalscience.com/global-warming-positives-negatives.htm
2. The best information I am aware of is that 77% of countries will become poorer because of climate change. https://www.nature.com/articles/nature15725
3. The above study estimates global GDP might be reduced by around 23% by 2100. The IPCC probably wouldn't make a super specific estimate like that, since there is huge political pressure on them to be as conservative as possible. However, even though that may represent the total GDP loss - there will be many who are far worse off. People in hot, dry areas may be forced to seek asylum elsewhere.
140 million people might have to move by 2050 https://www.worldbank.org/en/news/press-release/2018/03/19/climate-change-could-force-over-140-million-to-migrate-within-countries-by-2050-world-bank-report
By 2100, 300-600 million people might be displaced by sea level rise alone https://phys.org/news/2021-06-areas-climate-uninhabitable.html
So, I don't know if you consider that to be worse than a percentage of GDP. But seems bad to me.
4. The extra heat cancels out the effect of CO2 fertilization. https://skepticalscience.com/more-co2-hurts-key-crops.html
5. That may be true right now, but not in the long-term (which is more important, since more people will be alive in the future than are alive now) https://www.monash.edu/medicine/news/latest/2021-articles/worlds-largest-study-of-global-climate-related-mortality-links-5-million-deaths-a-year-to-abnormal-temperatures
6. The Australian carbon tax reduced emissions by around 4% per year, even though they were previously growing fairly rapidly - during this time GDP was still rising by around 2.6% per year www.lse.ac.uk/GranthamInstitute/wp-content/uploads/2014/08/OGorman-and-Jotzo-Impact-of-the-carbon-price-on-Australias-electricity-demand-supply-and-emissions.pdf
Also, the risk of triggering positive feedback loops would be a pretty major cost [IPCC AR6 WG1, SPM section C3, p37]
Solar and wind are now cheaper than fossil fuels (https://doi.org/10.25919/16j7-fc07) however our electricity infrastructure hasn't been built for variable, distributed generation. We currently lack energy storage on our electricity grids to "firm" the supply. However, we can simply build large batteries - lithium batteries will probably get a lot cheaper very soon - this still will require public investment, but we can't afford more climate change. Or we could just use nuclear. Or we could just tax carbon like the vast majority of economists want to (https://www.econstatement.org/)
7. What projects are you thinking of, and what are their costs? Geoengineering technologies for climate change are all in their very early stages - have you considered the risk that they won't work?
8. It could turn out that those people are right, couldn't it? What do you think?
9. China's net zero commitment is super vague and the CCP has delayed climate action before (https://www.theguardian.com/environment/2009/dec/22/copenhagen-climate-change-mark-lynas). However, it's nearly finished implementing an emissions trading scheme, putting it a few steps ahead of America, Australia et al. But, in any case, climate change is a global problem - no one country can stop it. That's why international agreements such as Paris are so important for us to stick to, and not feel emboldened by laggard countries to be laggard ourselves.
8: It is of course possible that anti-capitalism would be the best system in certain contexts, at least (e.g. I'm sure that some form of fairly extreme socialism is optimal for *some* level of technological sophistication, population size, biome of government types around the world, etc., whereas capitalism would be optimal for others, almost no matter how you define "optimal"). But if that's the case, that should be argued on its own merits, rather than trying to make the (fairly weak and almost universally poorly argued, IMO) case that it is the ONLY solution to climate change.
Well, the heuristic is fairly straightforward: capitalism (market economy) works; nothing else does. Ever. Something else might hypothetically work in the future, when there is a total abundance of everything thanks to robots etc. My guess, though, is that without capitalism we will lose democracy - because the political class will be hundreds of times more powerful than today, deciding ALSO about the allocation of all the resources. Or some kind of AI will, or a blend of the two. What could possibly go wrong, eh?
On temperature extremes, from Guzey’s best of Twitter:
https://www.worldclimateservice.com/2021/03/01/trends-in-temperature-extremes/
https://judithcurry.com/2021/07/15/heat-waves-and-hot-air/?amp=1&__twitter_impression=true
No clue myself if any of that is true, and he didn’t know either. But this purports to show extreme variation isn’t increasing
Great post
Just to pick a small part of this to demonstrate my frustration with these discussions:
"By 2100, 300-600 million people might be displaced by sea level rise alone"
Is that number based on humans doing nothing in reaction to rising sea level? Why would we do nothing?
I'm aware of the fact that Cape Town was able to reclaim several blocks of land where there used to be sea and this was decades ago. Hundreds of years ago the Netherlands performed miracles with water and land reclamation. It's just so hard for me to believe we won't be able to save New York / London etc with some engineering and human ingenuity. We're masters of using technology to adapt to the elements.
My main concerns remain: A. People not wanting to leave their air-conditioned homes and B. Loss of bio-diversity.
I think most cities that have "reclaimed" land from the sea usually took places where the sea was very shallow, added additional rock/soil/etc to raise it above sea level, and then built on it. It almost goes without saying that those locations were unbuilt before they added rock/soil/etc, because those locations were underwater.
I think the process of taking a place that was already built and raising it is much harder. The only example I am aware of is the Raising of Chicago in the mid 19th century: https://en.wikipedia.org/wiki/Raising_of_Chicago
I suspect that some cities might be able to do this with some of their relevant neighborhoods, but very few particular places are going to be worth investing this much effort in - the costs of this sort of raising would thus be even greater than the costs of displacement of the several hundred million people that live on land that will end up below sea level. (I suspect that New York, Miami, London, etc. will be worth investing in, but the hundreds of millions of people displaced are mostly in Bangladesh, Indonesia, and other low lying highly populated places.)
No, it's based on a few different projections of what emissions could end up being, from little action to high action. Obviously there are some big unknowns and uncertainties - this is always going to be the case for a topic like climate change, which encompasses basically everything on the planet. Read the study here https://www.nature.com/articles/s41467-019-12808-z
What proportion of the world has air-con (or would if they could afford it)? I live in Southern Europe and the first time I saw an air-con unit was when I visited America.
I hope to get back here when I have more time. Right now only this:
8. - As someone who spent half of his life in a non-capitalist country (Czechoslovakia), I am absolutely sure, I mean absolutely, that you don't want to lose capitalism. It is the engine that makes all the money you need to eventually take care of the environment and, in fact, everybody's needs. When we got rid of communist rule in my country, the environment improved incredibly within a few years. I even think that without capitalism you lose democracy (there has never been a successful democratic country without capitalism - meaning a market economy; Scandinavian countries have market economies, to be sure). As for degrowth - I believe that without growth you have a recipe for social disaster. Everything suddenly becomes a zero-sum game - good luck with that. From another angle, the richer a country is, the easier it is for it to handle emergencies, disasters and problems in general. More growth means more wealth.
9. Chinese communists will do absolutely nothing to threaten their economic growth. It is a part of their social contract and power is all they seek. Also, they cannot be trusted, the communist assholes they are. I have seen this in and out, they can be forced to do something, at best. Do we have enough force?
I don't think your assumption regarding a link between capitalism and environmental improvement is correct. Between the 60s and the 90s, the US also vastly improved environmental conditions... got rid of acid rain, cleaned up water and air, etc., and that had nothing to do with becoming more capitalistic; it was because of better knowledge, technology, and regulations. China has improved too. I see no link whatsoever between using technology to implement environmental improvements and whether that is funded by private or public ventures (indeed, there is often no profit motive there, so it usually has to be implemented by public investment or regulations).
He didn't say "capitalism is the only thing that can lead to environmental improvement." What he is referring to is the reputation Stalin and Mao's regimes had for astounding degrees of waste. When you remove the price mechanism for fuel, for waste disposal, for resources, none of the workers or middle-managers or project czars have any incentive for efficient use of resources. If they need more coal to build some bridge, they ask the party for more coal and it gets sent over. No one keeps track of how much resources they consume because no one has to pay for it.
We find, just empirically, that the price mechanism keeps everybody more on their toes about resource consumption.
From Poland. The problem with communism was that a massive part of the economy was completely dysfunctional, with everyone's interests utterly detached from any sort of productivity.
So you got the worst parts of capitalism with no benefits.
> better technology
made available in Poland thanks to the collapse of communism; the other two as well, but this was the clearest one
> The Soviet whalers, Berzin wrote, had been sent forth to kill whales for little reason other than to say they had killed them. They were motivated by an obligation to satisfy obscure line items in the five-year plans that drove the Soviet economy, which had been set with little regard for the Soviet Union’s actual demand for whale products.
https://psmag.com/social-justice/the-senseless-environment-crime-of-the-20th-century-russia-whaling-67774 is just one tiny example
> China has improved too
China's economic system is about as communist as I am Chinese
> I see no link whatsoever between using technology to implement environmental improvements and whether that is funded by private or public ventures
the problem is not the "public" part, it is the "dysfunctional as fuck" part (guaranteed in centrally planned economies that are detached from actually doing something useful)
8. In general, I agree that capitalist societies have done a lot better. I certainly don't foresee this trend continuing once AI becomes capable of replacing humans at nearly everything. But there is a major flaw in capitalism if the costs of climate change remain external to the market.
9. Although China certainly burns a lot of coal, decarbonization is also a major opportunity for them - they make a large proportion of solar panels and batteries. My feeling is that the best approach to tackling the CCP is to continue raising tariffs and slowing international trade with China, until their government agrees to democratize and respect human rights.
Just to expand on your point about richer countries handling disasters better, it is often illuminating to ask the climate-alarmed which climatic extreme kills the most Americans, and what that death rate is. You know, so we can compare it to the 30-odd thousand who are shot to death or the similar number who perish in road accidents.
It's floods which are the most deadly of all the climate disasters - about one hundred Americans lose their lives to flooding every year.
Such a relief that AR6 doesn't suggest that flooding is getting worse (despite what some headlines might misleadingly hint at)
"At 1.5°C global warming, heavy precipitation and associated flooding are projected to intensify and be more frequent in most regions in Africa and Asia (high confidence), North America (medium to high confidence) and Europe (medium confidence)"
AR6 WG1 SPM p33
So, if flooding does (eventually) increase by 10% might we expect an extra 10 Americans to die every year? No; because deaths from all climate phenomena have been plummeting, everywhere, for as long as there have been records. Extreme weather was a vastly greater problem in the past than it is now, and will be in the future.
To clarify, it will progressively become less of a problem.
That is what irritates me most: every weather event is "because of AGW", according to most media. As if these things did not happen before. It is ok to use statistics to prove the point, but they mostly don't do that.
If it was not for the media coverage, 99% of people would not be able to make any conclusion, whether it is getting warmer or cooler or whatever. Except for the subjective cognitive bias "this did not happen when I was young".
I've seen groups like World Weather Attribution quoted in news articles to justify linking extreme weather events to climate change. Some news sources might not, but I've definitely noticed it happening. This is a group that estimates what percentage of an extreme weather event can be attributed to climate change, check them out - https://www.worldweatherattribution.org/
This chimes in with Scott's thoughts on cost-benefit studies from the last links post. William Nordhaus estimated that the optimum level of global warming would be about 3.5 degrees C. Of course then you realize why nobody else has bothered to do a cost-benefit analysis - it doesn't fit anybody's narrative. Seeing as BAU is taking us in the vicinity of 3.5 degrees, we merely need to carry on doing what we're doing - 3 or 4 hundred billion a year in subsidies for renewables and all will be well. The 'Very Alarmed' simply cannot have that - as Gurdjieff nearly said, people will give up anything you care to mention, except their alarm. And the 'Very Sceptical' also simply cannot have that because it would necessitate accepting that 3 or 4 degrees of warming could be created by our carbon emissions.
You would think that with such a supposedly big problem with vast potential costs, the helpfulness of assessing costs and benefits would be great, but it seems not.
Even capping warming at 3.5 degrees requires substantial investment, though.
I am no expert, but what I've read about the topic strongly suggests that while countries formally committed to 2 degrees, there is no real expectation that this goal will be met without some unexpected technological (or political) breakthrough. So we will get a much higher rise in temperatures than planned, but lower than if we did nothing.
There have been a number of cost-benefit analyses. The most well-known are Stern (2006) and Garnaut (2008). Both found reducing GHG emissions had higher benefits than costs.
https://en.wikipedia.org/wiki/Stern_Review
https://en.wikipedia.org/wiki/Garnaut_Climate_Change_Review
I don't know Garnaut, but I know the Stern review uses some quite nonstandard assumptions to reach that conclusion - specifically it assumes that costs and benefits depreciate at 1.4% per year as you project them into the future.
The model Stern (and basically every economist) uses here is that money can either be spent on things we like now, or saved/invested so that we have MORE money to spend in the future. When we're talking about intergenerational transfers of wealth (which we are when we talk about climate change), everything we spend now preventing climate change 'steals' money from the future that they could use for other things they want. So the question is: how much would money that could be 'gifted' to the future now via investment actually be worth when the future rolls around? Alternatively, what do we 'steal' from the future when we spend money now on making the world better for them?
NB - Apologies that 'steals' is a bit of a pejorative term - I couldn't think of another phrase that meant that you're taking money from future generations without their consent, but in this case you're taking the money with their best interests in mind whereas most people who steal don't do so with good intentions.
Stern says that the answer is that we 'steal' 1.4% of the value of the money each year by spending it now rather than investing it. This means that Stern argues the £100 we spend preventing climate change now would only be worth £400 in 100 years' time (so if the value of the improved environment we gift the world is worth >£400, it is positive expected value to improve the environment). However, it is really not clear where this 1.4% comes from; the UK government uses 3.5% as a much more standard discount rate, which would imply that £100 spent now 'steals' £3,000 from the future - nearly ten times higher. Stephen Watson below suggests that the Nordhaus analysis (which I haven't read) uses 5%, implying we 'steal' nearly £12,500 from the future!
I ended up a big sceptic on Stern - discount rate is almost single-handedly the parameter which leads to his eye-catching conclusion, and yet it is different from the standard value given by the UK government for future planning, Stern doesn't really explain where 1.4% actually comes from and there is inadequate sensitivity analysis around this key number. It wouldn't surprise me if preventing climate change did have a positive cost-benefit ratio, but it felt to me like Stern decided that that was going to be his conclusion and made the data fit his theory rather than the other way around.
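To make the sensitivity concrete, here's a quick back-of-the-envelope sketch - just standard compound-interest arithmetic using the three rates quoted in this thread, not anything taken from Stern or Nordhaus themselves (the function name and labels are mine):

```python
# Hypothetical back-of-the-envelope check: compound 100 pounds for 100 years
# at the discount rates quoted in this thread (Stern ~1.4%, UK Treasury 3.5%,
# Nordhaus ~5%). Nothing here comes from the reviews themselves.

def future_value(present: float, annual_rate: float, years: int) -> float:
    """Value of `present` after compounding at `annual_rate` for `years` years."""
    return present * (1 + annual_rate) ** years

spend_today = 100.0   # 100 pounds diverted from investment into mitigation now
horizon = 100         # years into the future

for label, rate in [("Stern (~1.4%)", 0.014),
                    ("UK Treasury (3.5%)", 0.035),
                    ("Nordhaus (~5%)", 0.05)]:
    fv = future_value(spend_today, rate, horizon)
    print(f"{label:18}  100 pounds today is roughly {fv:,.0f} pounds forgone in {horizon} years")

# Prints roughly 400, 3,100 and 13,100 pounds respectively - i.e. the
# break-even benefit of spending 100 pounds now varies by a factor of ~30
# depending only on the chosen rate.
```

Which is really the point: the chosen discount rate, far more than any climate parameter, drives the headline conclusion of these cost-benefit analyses.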
Note - discounting is not just about the rate of return on investment, but also about the relative importance we put on future suffering/pleasure compared to present suffering/pleasure. A 1.5% discount rate might be low on financial investments, but it is probably high to put on the suffering/pleasure itself - it seems quite plausible that 0% is the appropriate discount rate for the actual hedonic value.
I think the hedonic argument is plausible, but money really isn't hedonic value, because money can be invested. Even if we assume 0% hedonic discounting, in 100 years we could have 400x the money to spend on whatever hedonic thing we want. So suffering/pleasure doesn't become worth less, but it does become cheaper at a regular financial discount rate.
Sorry, I should add that it wasn't like Stern pulled the number 1.4% out of thin air - he argued (probably correctly) that the discount rate depended on the situation with the global economy, and that the global economy would react to different climate scenarios differently. So Stern didn't actually have any single discount rate, but rather a range of discount rates of which the average was about 1.4%. But then as per my original comment this is significantly less than the UK government projects while looking at the same data, so at least one of either Stern or the UK Government is materially wrong here, and Stern does a less convincing job of identifying where his numbers come from IMO
Nordhaus also found that reducing GHG emissions had higher benefits than costs... but only down to 3.5 degrees C.
There's a pretty good chance Nordhaus has a flawed approach to discounting social costs and benefits - when weighing up intangible benefits into the future, he usually discounts them with a rate similar to the economic discount rate, 5% or so. Though there is considerable divergence on the issue, and many think it should be even higher, the most common opinion among experts nowadays is that this discount rate should be zero. The average is 2.6%. https://www.jstor.org/stable/pdf/26529055.pdf
Here's an amazingly thorough exposition of the issue, if you have a few months - https://users.ox.ac.uk/~mert2255/papers/discounting.pdf
This seems like more of an argument for climate change being good, as opposed to a criticism specifically of either climate charities and policy or say Stripe’s carbon removal pilot program? I share concerns about overreach here but ... this seems like a disconnected and relatively unsupported list of questionable reasons that barely tie together, so idk how useful it is to respond to them all.
1. Not an expert here, but there are petabytes of arguments about why climate change is, in fact, bad for the environment.
2. Not an argument? What reason is there to believe "the balance may be positive"? That's just a disconnected statement.
3. A couple of percentage points isn't great! Normal policy stuff barely has the ability to increase GDP that much - free trade agreements individually raise GDP by ~0.01%, and that's also probably the ballpark of most other policies. It's especially not great for the people and areas the loss is concentrated in. An x% GDP loss could mean anything from Beijing getting nuked (bad, you would think), to the entire fast food, cosmetics, most-of-fashion, video game cosmetics, most-of-marketing, MLM sort of industries getting outlawed (which would be awesome), to the pandemic, somewhere in between. But a 3% estimated GDP loss, which probably isn't that exact, could correspond to something bad. Probably worth directly discussing that instead lol
4. Again, not an argument. Yeah co2 increases ag yields by a decent amount, that is known and taken into account, well or not. Why would that be better than the harms climate change might cause? What are those?
5. Source? What? How am I supposed to understand this? The IPCC predicts lots of deaths and displacement from climate change, likely on net - how am I supposed to even begin to think about why? Where can I learn about this?
6. Current expert consensus is that solar and wind are already price competitive in some areas. And they will continue to follow a learning curve and decrease. Are they wrong? Why? The GND might be too expensive and might decrease growth, but how much and why? How could it crater GDP by a factor of 4 relative to future?
7. Estimates? Huh?
8. Yes, they are kind of dumb. But, say, Stripe's program or Biden's goals don't seem to include that?
9. Some theories are that solar's price competitiveness will lead to worldwide adoption. And that US energy consumption has stopped growing due to much more efficient products.
For all I know, climate change isn’t real. But this doesn’t convincingly argue that.
Look I'm definitely no expert, just a guy with opinions. But I think a lot of people are underestimating extreme discomfort as a reason for concern.
I'm pretty confident general infrastructure advances can mitigate a lot of the bad effects of GW, and the whole "wars fighting over resources threat" always struck me as silly and unhinged.
But it's going to really suck if large parts of the world have unbearably hot summers where you sweat your arse off whenever you leave air conditioning. That might be hard to put a price on, but I'd argue global comfort is on par with ideals like "space exploration" and worth throwing a ton of money into.
Of course we also don't want to make a ton of wild animals go extinct either.
I think cheapish geo-engineering is becoming more inevitable every day.
> But it's going to really suck if large parts of the world have unbearably hot summers where you sweat your arse off whenever you leave air conditioning
Right now large parts of the world have unbearably cold winters where you freeze to death if you go outside without heavy clothing. It seems like swings and roundabouts to me.
Even a three-degree change doesn't seem like a big deal to me. An "unbearably hot" day for me is anything above 40C; that happens a few days a year where I live, and maybe would happen twice as often if the weather were three degrees hotter... but that still doesn't seem like that big a deal.
How many people live in places where you'll freeze to death if you go outside without heavy clothing, compared to places with unbearably hot summers? I can't be bothered to do the math, but my guess is that the difference is at least a factor of three. Is it worth it if one billion people are less at risk of freezing to death if it means that three billion people are more at risk of dying of heat stroke?
My guess is a factor of ten the other way round - lots of guesses going on!
I'd add that if you look at the destination of people who move from one place to another, it is generally towards warmer places. People retire to Florida, not Alaska.
Also, the vast (mostly) uninhabited parts of the planet (Siberia, Mongolia, Greenland, Antarctica) are just too damn cold for human beings. After all, we live in an ice age!
What do you mean, other way around? Surely you don't think that more people live in cold regions than in hot? This 2000 graph suggests it could be as much as a factor of 50: https://www.themarysue.com/world-population-latitude-longitude/
To clarify, by 'cold' here I mean 'sub-5 degrees C on average'.
Another way to look at the graph you linked to is to ask 'What is the temperature of the places people choose to live in?' And the answer is - warm places rather than cold. Near the equator rather than the poles.
Why is the whole of Northern Canada almost completely empty and the Southern United States nearly full??
The annual deaths from cold are several times higher than the annual deaths from heat.
If we didn't have artificial skin (clothes) and vast amounts of heating, the majority of people on the planet would simply die of cold. Before the discovery of fire, we could never have left Africa.
What is the difference in the energy spent on heating as opposed to cooling?
The average temperature of Earth is about 15 degrees and the optimum for humans is significantly higher than that.
My temperature concerns are generally reserved for the speed of change, not particularly the amount (up to Nordhaus's 3 or 4 degrees)
So maybe I guess the thing to calculate is cost of moving significant numbers of people to cooler areas.
Like sure if we redistributed everyone to Siberia that could be nice. But remember that most of the world population are in China, India, and to a lesser extent Africa. All have very hot climates already. In Hong Kong they have these cool air conditioned bridges between buildings to escape the heat but still get around but that's not a viable solution for most of humanity.
Also, I'll add: I live in a hot country and don't mind super hot days either, because I work in air-conditioning. But I enjoy camping, and it's super important how many days of the year I'm able to go enjoy the outdoors with my kids. Less outdoor time for most of the world seems super sad to me, even if everyone watching TV indoors doesn't have any concrete GDP downsides.
My in-laws moved to Georgia a few years ago, and they originally struggled with the vastly hotter summers. Then they realized that they could just do more outdoor activities in the spring and fall. That's a bit of an issue with kid's school schedules, but Georgia specifically has more time off during the school year (two weeks in October and March or April IIRC).
Hell has a high gdp
Dumb Afghanistan question:
I hear some people saying the US "should have waited longer" to leave Afghanistan, and other people saying "they waited 20 years, what good would another few months have done"?
Is there some reason the US didn't spend the past year or so evacuating all civilians who they wanted out of Afghanistan? Right now it seems like it would be nice to have another year to evacuate people, but what have we learned that we didn't know a year ago?
Also, are people who aren't evacuated quickly enough really in danger from the Taliban? Why would the Taliban want to kill Westerners and anger Western powers? Isn't it in their best interest to let all the Westerners leave in an orderly fashion?
On May 15 the embassy released a second advisory reiterating their April 23 advisory that Americans should leave Afghanistan as soon as possible.
So far I think the Taliban have learned that angering Western powers and dropping your wallet on the street means you can't buy a cup of coffee.
To your first question - was that not the intention? As late as 12th August leaked reports were suggesting Kabul could fall in 90 days (and that was seen as alarmist) e.g. https://www.bbc.co.uk/news/world-asia-58187410 giving a reasonable timeframe to evacuate. Unfortunately even the worst case scenarios planned for were optimistic.
There will be disputes over things like foreign currency reserves, debt, sanctions and aid over the next months, and the Taliban will need to decide if they want to co-operate with the West on evacuations to show good faith, or go the Iran/North Korea blackmail route and seek any opportunity for leverage (e.g. de facto hostages). Who knows their approach, but I wouldn't hang round Kabul to find out.
There are already a lot of people dead. I'm only seeing stuff about Afghan collaborators and nothing about US citizens, but I have little doubt that the Taliban would gladly kill US citizens if they weren't worried about getting caught. If it helps, think of them as the Afghanistan version of the KKK and Know-Nothings.
From Twitter, describing an "American mom".
"She was whipped by Taliban fighters on one attempt to get through, she said. A man standing near her was shot in the head on another try, leaving his wife and baby in tears.
Since then, she's been in hiding."
> Is there some reason the US didn't spend the past year or so evacuating all civilians who they wanted out of Afghanistan? Right now it seems like it would be nice to have another year to evacuate people, but what have we learned that we didn't know a year ago?
The military learned that a commander in chief was serious when he said he wanted to get out. They'd been able to persuade every prior President to stay and commit more money and troops.
I only spent limited time around senior leadership but here's my intuition:
>Is there some reason the US didn't spend the past year or so evacuating all civilians who they wanted out of Afghanistan?
Evacuation was never enumerated as an operational priority. Getting interpreters out has been a huge hassle for ages. For evacuation to be a priority, someone very senior in the administration has to make it a strategic priority; it's not the kind of thing that happens via inertia. Lastly, I can't overemphasize the level of denial around even considering the possibility and timeline of Kabul falling. And staff officers don't make plans around scenarios they're collectively in denial about.
>Right now it seems like it would be nice to have another year to evacuate people, but what have we learned that we didn't know a year ago?
Learned about what? How to evacuate people? I would guess we're learning a lot about evacuating people and running a Berlin airlift style operation. I would also guess we're going to learn a lot about cohabitating in Kabul with the Taliban where you can't just airstrike them whenever they're visible because they're everywhere now.
>Also, are people who aren't evacuated quickly enough really in danger from the Taliban?
Depends who they are and what they did. Pashtunwali is a real thing, so I wouldn't expect Khmer Rouge-style executions. But yeah, if you were an interpreter with a US partner force then you might get shot. Navigating Pashtunwali is complex, and there are a lot of relatively cosmopolitan Afghans in Kabul who would rather not live under a redneck regime.
>Why would the Taliban want to kill Westerners and anger Western powers? Isn't it in their best interest to let all the Westerners leave in an orderly fashion?
I'm mostly checked out of the news, so I'd be very surprised if this is happening. A running question has always been how monolithic and organized the Taliban is. I have experience at a much more local level, and my strong prior is that the new regime will probably be fairly orderly, interspersed with some mob violence and maybe some symbolic shows of force - like killing some Jews or Christians or something. In the category of "a thing you're not allowed to say but is true": the Taliban is often better at governance than the Afghan government. I'm not talking complex policy, just the everyday expectations of local government.
Ignorant opinion here: the moment the US started evacuating people, this would indicate a pull-out, and the collapse of everything (the army literally dropping weapons and scattering, the president filling a car full of cash and heading for the UAE) would have happened sooner. See this story from a pro-Ghani source: https://edition.cnn.com/2021/08/20/asia/afghanistan-ashraf-ghani-taliban-intl/index.html
""In the days leading up to the Taliban coming in Kabul, we had been working on a deal with the US to hand over peacefully to an inclusive government and for President Ghani to resign," he said.
"These talks were underway when the Taliban came into the city. The Taliban entering Kabul city from multiple points was interpreted by our intelligence as hostile advances," the senior official said.
"We had received intelligence for over a year that the President would be killed in the event of a takeover," the official added."
The pull-out or handover or whatever you want to call it was predicated on "there's a government in place, there are armed forces, they can step up and take over while we evacuate", but in reality that didn't happen. So even if the US forces had said "We are leaving over the course of an entire year", the only thing the native Afghan forces would hear is "We are leaving", and they'd be out the door to save their own necks in seconds flat (and given the entire history of the country, who can blame them? It's a record of everything from: invaded by the Islamists, invaded by the Mongols, invaded by the Mughals, overthrew them and set up our own kingdom, then an empire, oops internal dissent knocked that on the head, to a monarchy, sort-of invaded by the Brits peddling influence during the Raj, to a monarchy that was sort of reformist and progressive, to a civil war over those reforms, to a monarchy again, to a democracy part one: I overthrew my cousin the king and declared myself president, to a Communist regime, to invaded by the Russians who set up a Soviet-backed regime, to more civil war, to the Taliban, to invaded by the Americans, to a democracy part two: this time for real - yeah right, the Americans are going home - and that brings us up to date, with the only constant being that as soon as the new regime takes over, the followers of the old regime are for the chop).
I'm also seeing a lot of talk about this was deliberate on the part of some section or other of the American administration, that (take your pick of) the generals or the intelligence agencies or the hawks or the Deep State or the fairies at the bottom of the garden didn't want to leave and deliberately FUBARed the withdrawal in order to damage Biden and lead to calls for US forces to go back in. How true or not this is, I have no idea, there's a lot of gossip and rumour going around right now.
The obvious problem is that our evacuation would set off the collapse. The obvious solution is to have enough U.S. troops on site to hold the parts of Afghanistan that we need in order to evacuate our people for the month or so it takes.
Why wouldn't that work?
A) Some people will not start evacuating until after the evacuation window has closed. August 16th was not the end of that window, but it does appear to have been a significant wake up call that there were only a couple of weeks left. Less, now.
B) Is that not what we're doing? Everything I've seen indicates that US citizens in Afghanistan are nominally free to make their way to the airport in Kabul (plus or minus the general dangers of ground travel in the area), where they will find 5k US troops running the evacuation effort. That is planned to end with the month, but if Biden extends it I doubt the Taliban will seriously contest control in the short term.
(SIVs are a complicated debacle that ought to be addressed separately.)
I normally disagree with you, but I wholeheartedly agree here. On top of troops, we're pretty good at airstrikes, as well, so I don't see why our only two options must be "stay another 20 years" or "pull out haphazardly, leaving our local allies to die".
In Germany, that is exactly what the current debate focuses on, since a few weeks before the collapse the government was divided on the question of whether to pull out civilian helpers.
They decided against it because the Afghan government begged them not to pull out, backed by the US and other allies. So for Germany, very concretely, this was *the* reason why they didn't evacuate civilian helpers beforehand.
Even then, pulling out 4 days before collapse is better than 4 days after IMO
I mean this is just like timing the stock market. Yes of course the ideal time to cut and run is right before everyone else does. But cutting and running is causally influential to everybody else doing the same.
No it isn't, it's like insider trading, which can work. Biden controlled both the timing of troop withdrawals and, to a lesser but significant extent, the timing of evacuating US citizens, Afghan staff and current refugees. The second was allowed to happen after the first because they expected Kabul and the rest of Afghanistan to fall slowly. That was a poor expectation, and should have been either better understood or just ignored for security reasons; and then the second should have happened before the first. And US troops are a stronger incentive not to invade than US citizens and Kabulis running are a reason to invade.
I just don't agree. To speak just about the Americans for the moment, everyone was advised in June that they should leave. Many people did. Think about the mindset of somebody already ignoring their government's advice to leave a warzone. These people weren't going anywhere until they felt personally threatened. Should we have forced them out at gunpoint? I think that would have been bad both for reasons of personal liberty and optics. Could we have sent them another memo, where we somehow convinced them that "no really, you should really leave, but the Afghan government is really totally fine, we think, don't tell any of the locals that you're leaving." I just don't think that's realistic.
As for our local friends, it's really a similar story, but even more obvious to see how we couldn't get them out without precipitating the collapse. This is their home. These people are probably friends and relatives of people in the Afghan government. They have families. There really is no "get these people out but otherwise don't accelerate the collapse of the government." This is magical thinking.
Of course, we could have had the military do that. We obviously could have said "forcibly evacuate all of these people," and gotten everyone out while troops guarded the whole city of Kabul and then evacuated themselves. I think, though, that we'd just be blamed for more directly causing the fall of the Afghan government that way.
This just all seems like Monday morning quarterbacking to me. If there's anything I want to criticize, it's that a 20-year nation-building operation couldn't stand on its own feet for a few days. Proportionally, criticizing the exit seems like a really strange use of attention.
They probably would have left sooner if the US had made clear that Kabul could fall within a week. And many Afghans would’ve left sooner if the us government had offered them transportation, sped up the visa process, and made that clear.
The collapse was gonna happen anyway. If we started with the evacuation, rather than ending with it, that’d be better.
But yeah this is somewhat low significance, but it does demonstrate that even to the end they didn’t know at all what they were doing
The null hypothesis is incompetence (systemic, not individual). I don't see any reason to move from that.
Answering questions in order:
1. Evacuating civilians / people - The people we'd want to evacuate are the people who were responsible for maintaining the civil order we were trying to project in Afghanistan. I heard this from some buddies in country, so take it only as an anecdote, but as troop deployments decreased, compensation for private security over there guarding infrastructure and individuals got increasingly lucrative. As long as the nation wasn't actively enemy-controlled, these people had no incentive to leave early and the US had no incentive to get them out early.
2. Are they in danger from the Taliban - probably not... The Taliban's incentive right now is to act as much like a normal government as possible for a while, which means doing their best to present a unified front that can maintain domestic control and not indiscriminately slaughter people. That being said, the Taliban has actively begun hunting down people they believe to be "collaborators" - non-Western Afghans deemed to have helped the occupying forces (police investigators, military officials and the like) - so those people have good reason to fear for their safety. Westerners are probably safe from mainline Taliban forces... but do you trust every component of the Taliban to have the discipline to hold to the party line on Westerners? What about non-Taliban forces inside Afghanistan - how much would you trust the Taliban to protect you against random looters?
My take is that Afghanistan just got drastically less safe overall, and this is what's prompting the flight, not some specific threat the individuals are running from.
One consideration is that the bulk of the State Department people are doing a one-year tour, and they are often pretty isolated from the general population. Afghanistan is a pretty undesirable post, generally avoided by State Department employees with clout. A bunch of short-term, mildly incompetent bureaucrats, and a mixed bag of incentives. It is not surprising that this ended poorly for so many Afghan civilians. The Americans don't seem to be dying in droves during this evacuation, and this is the main incentive.
I can confirm my experience with State employees has been unmitigated trash.
To be fair, they weren’t running the show. The military had/has lead. Doesn’t excuse how terrible they were/probably are though.
There was a zoo in Massachusetts that tried to stop a government budget cut by announcing that such a cut would cause them to kill animals. It didn't work, but the general principle is that to stop someone else from doing X you make the consequences of X as painful as possible for the decision maker. Intelligently preparing to leave would have made it politically easier to leave. The "deep state" didn't want the US to leave Afghanistan and so didn't take the proper preparations. Evidence of the deep state's desires is the surprising amount of media criticism Biden has gotten from his Afghan position, despite the fact that it could cost the Dems the California governorship and thus potentially control of the Senate given the advanced age of one of California's Senators.
The Taliban compete with other Islamic groups for global influence and funding. Killing fleeing Americans might be a "reasonable" PR strategy to use against their Islamic competitors.
How could it cost the Dems the California governorship? It doesn't seem like a California specific issue, and California is a solidly blue state.
It can't. Newsom is only in trouble because of COVID. My personal guess is that a lot of his Hispanic LA base is pissed off at the restrictions that caused their small businesses to implode last year. They might have been OK had it stopped COVID in its tracks, but it didn't seem to this winter, when there was a huge surge in LA.
That's really what it comes down to. He'll keep his white Bay Area FB engineer vote, but the question is whether he keep the brown LA house painter/construction worker vote. If not...he's toast. I think he'll squeak by, but it will be close.
Right now the recall election is polling within the margin of error as to whether Newsom stays or goes. If he goes, it's not totally clear who will win, but for many weeks a right-wing talk radio host was leading the polls (because Democrats were mostly planning to leave that part of the ballot blank) though now there's a left-wing YouTuber who has suddenly jumped in the polls because no one else in the race has a D by their name.
The recall election is on Sept 14, 2021. My guess is that a lot of voters lump together all the politicians of a party, so hearing negative things about Biden lowers many people's opinion of Newsom.
Definitely not an expert, but my sense is that it isn't US (or UK etc) citizens that are in danger, or who might not get evacuated. It's the Afghans who worked for America who are going to have problems.
It's probably a very unfair comparison, but the French who collaborated with the Nazis were treated rather badly when the Nazis 'left'.
Why is it unfair?
Because you might think there's a moral difference between collaborating with Nazis and collaborating with the US-backed government of Afghanistan, and it's quite possible that this moral difference will be reflected in the treatment of these people by the new regime (though it's also possible it won't be).
My completely non-expert opinion is that it was an attractor effect. Once enough Afghani powers decided that the Taliban were likely to win, they switched to supporting the Taliban.
So instead of having a protracted attrition war, we ended up with a sudden switch that didn't even wait for the troops to leave. So, ironic enough, you can't even put a number on "how many days the Afghan government lasted by itself" - it's negative.
So there was no evacuation because the official plan was for a long term war instead of an overnight change of power.
But how could the US be so surprised by this? I don't think they were. I just think that the decision makers work at simulacra levels that are decoupled from reality. Once the official plan was to leave, you can't move information upstream that would sensibly change the plan. I just can't imagine how this would work, institutionally. From everything I've seen in the past years, including the pandemic, I just don't think US has the capability of changing committed plans based on object level, competent analysis. When did it last happen?
I recently read an article by the guy who planned the withdrawal for the Trump administration. He blamed the solid September 11 withdrawal date that Biden's team selected. The plan under Trump was to leave sooner, but with a transition-based criteria system. If the Taliban didn't meet their end of the agreement, the US was not going to leave. According to the article, that means the US could have stayed indefinitely under the old plan, but that the Taliban was cooperating in order to get the US out. When September 11 was initiated as the "have to be out" date, the Taliban lost any incentive to hold back, knowing that the chain of command would struggle to reverse, and may be incapable of changing the plan.
I've read other discussions that basically say that the Taliban has been quietly massing outside all the regional capitals in farms and surrounding villages for the better part of a year due to the Trump plan, ready to take everything the minute we had drawn down. Other articles point to the May~October period as the heavy fighting time due to weather constraints, so figuring that regional capitals have been falling for about a month, that build up could have been as recent as March or so of this year. Either way, it looks more to me like the Taliban were playing nice up through about the official Trump withdrawal date and the general govt collapse was predictable after that short of a 40000 troop deployment and about 700 billion dollars worth of new bombing campaign.
That's still in line with the article I was referring to, as everyone knew the Taliban would take over once the US left. What is in question is whether civilians, equipment, etc., would be evacuated prior to that takeover. We'll never know for sure how the alternative world where the Trump plan continued would have gone, but there's a plausible scenario where we held to a goal-based timeline instead of an end-date timeline.
I just read a headline that says the Taliban will not announce a new government as long as there is still one US soldier present. It gives credence to Mr. Doolittle's suggestion, because even now the Taliban is holding back to some extent: there is still a chance, however minor, that the US could reverse course and attack them. I don't know what kind of deal was made with Trump. It depends on the details, but I can imagine that if they were promised something like only partial participation in the government, then that would be the deal the Taliban would have to accept eventually.
If that's true, then Biden's plan was very stupid. I also think that the US should have stayed in Afghanistan until they were sure that some of the things they introduced would stay. Throwing away everything that was achieved is always easy; it doesn't require smart thinking or anything. But there are real people in Afghanistan and they deserved better.
> I also think that the US should have stayed in Afghanistan until they were sure that some of the things they introduced would stay.
I don’t think there is any way for the US to win a staring contest with the Taliban. Time is on their side.
Oh come on. Where's the National Socialist German Worker's Party these days? What happened to Tojo? The Brits certainly fixed the Boer's wagon, home guerilla advantage notwithstanding. There are no Nationalists left in mainland China, and no followers of the late President Thieu bombing the occasional railroad bridge in Vietnam.
It is certainly possible to win against a weaker opponent even if he's got the home-ground advantage, can melt into the countryside, is willing to live in caves and eat rats and don suicide bomb vests. It's entirely possible to wipe those people out, root and branch.
But it requires focus and commitment, and quite often some pretty ugly decisions. You have to be damn sure that's what you want to do. Being half-assed about it, and not entirely certain what you're trying to achieve definitely doesn't work, and never has.
Nobody who's paid attention to American politics for more than 25 years is surprised that a plan of Biden's was stupid. The guy has been somewhat of a joke for the umpty-five times he's run for President - and that was when he was younger and faster on his feet. He's always been just a good ol' boy Irish bullshitter, with very little of a clue. It's kind of stunning that he eventually made it over the top, and a real testament to how many people ended up disliking Trump and how chaotic the Democratic primaries were in the midst of COVID. Even the guy who picked him for Veep (Obama) thinks Biden's a lightweight and pointedly declined to endorse him as a successor in 2016.
> Is there some reason the US didn't spend the past year or so evacuating all civilians who they wanted out of Afghanistan
I assume that the people who made this decision intended to abandon most of the local Afghans who helped US forces (interpreters, cooks, translators, etc.).
People on twitter have been saying that a) the State Department issued advisories months ago telling Americans to leave, but can't compel them to and b) repeating Biden's answer that starting mass evacuations of Afghans would lead to a "crisis of confidence" in the new government - essentially admitting defeat without a fight.
>a) the State Department issued advisories months ago telling Americans to leave, but can't compel them to
This is the part I keep coming back to - nobody in the US went to sleep and woke up shocked to find that they were somehow in Afghanistan. The official State Department Level 4 "leave on a commercial flight ASAP" advisory was April 27th. By June 15th, the boilerplate language had been updated to:
>If you have concerns about your health or safety in Afghanistan, now is the time to leave. Commercial transportation and infrastructure are intact and operating normally. Strongly consider this option. If you decide to remain in Afghanistan, carefully consider all travel and limit trips only to those that are absolutely necessary. Given security conditions and reduced staffing, the Embassy’s ability to assist U.S. citizens in Afghanistan is extremely limited.
If someone chose to stay after that, they were signing up for the uncertainty of whatever came next. To the extent that those people now regret their actions, that is not a failure of US planning.
The problem with the "crisis of confidence" is that the US intelligence community knew the Taliban would retake the country. If we knew it, surely the locals did as well. You can't count on the Afghani soldiers holding out for a few months and then giving up, when they know that's the end game plan. They have every incentive to switch sides and avoid fighting at all, maybe getting a place in the government. It's one thing to think that the new government would hold, and then asking the Afghani military to do their part. It's another thing to know that it wouldn't hold, and still expecting the military to fight under those circumstances.
People do fight for lost causes, when they believe in them. The US establishment bristles at comparisons to Vietnam, but in this respect they are very apt. The Afghan government was an obvious sham, thoroughly corrupt, incompetent and alien to the native culture, only propped up by US dollars and military presence.
However, publicly admitting this would be tantamount to declaring that decades-long American policy was a colossal mistake, so the kayfabe had to be maintained in defiance of common sense and humanitarian concerns.
It was a colossal mistake, but in theory they could have supported a more grassroots government, even if that meant including the Taliban. The US pushed its own values too hard. The US embassy in Afghanistan put out an LGBT support announcement very recently; think about how controversial that was just a few years ago even in the US. Even the most progressive parts of Afghan society would still be very biased against those things. We pushed too much of that and too little of the basics: that a better court system, a better economy, better healthcare and even education are beneficial to everyone, including the Taliban itself. There are plenty of Islamic countries that use all those benefits while staying very conservative, like Saudi Arabia. That is not what we think is optimal, but it is much better than Afghanistan. We had no such perspective, we didn't value what they could achieve in those areas, and so we threw away everything.
I'm not confident I know how the US intelligence community works well enough to say who knew what, and when. But by analogy from covid-19 response, I'd say that there may be a wide, perhaps insurmountable gulf between "someone in some government agency predicted X would happen" and "the government knew X would happen well enough to operationalize that knowledge". I think it's too easy for politically inconvenient knowledge to get buried especially when leaders have incentive to ignore it and listen to the other advisors who claim to know differently.
Can you find anyone in a serious position within the US government who thought the Ghani-led government would remain in power after the US left? I've seen lots of predictions about how long it would hold out, but nobody seriously thought they would be around a few years from now. Most that I've seen were hoping to make it to the end of the year. Even a year ago the Taliban had taken control of so much of the country that the Ghani government couldn't be said to actually be in control of the country anymore. The steady Taliban gains throughout 2021 should have made that obvious to people months ago, when there was still time to make changes to the plan.
I used a program where you type in something and an artificial Intelligence makes art based on what you typed. I typed in a bunch of movie titles and made a quiz. Can you guess the movie titles based on the art? This will test how well you understand how an AI thinks. Or how many movies you watch. There are no sequels or prequels.
https://thorsbyprojects.tumblr.com/post/660029925870518272/ai-generated-art-movie-quiz
Also here is the same thing for tv series: https://thorsbyprojects.tumblr.com/post/660032539928510464/ai-generated-art-tv-quiz
Did you just type the movie titles? It looks like the AI has seen lots of the publicity art for these movies online, and has only been diverted from that in cases like Gladiator, Cleopatra, and Elephant Man, where the words in the movie title are common enough outside the movie that generic images took over. (I'm a bit surprised that Labyrinth is so associated with David Bowie's hairstyle rather than a Minotaur though!)
Just the titles, though it produces a series of images, and I tended to choose the easiest image. Except sometimes most of the title would appear as text, too easy.
The hairspray bottles really seemed to me to be trying to say HAIRSPRAY!
That was fun! Thanks!
In general I'm very sympathetic to Scott's views on education. I don't believe we've got any system that teaches kids more than basic numeracy and literacy. But I'm curious which of the following people think is more plausible:
1. People are plastic and develop intellectually in any stimulating environment. If they focus their attention on one area they'll develop pretty much to their potential in a few years.
2. We're just awful at teaching anything and getting people anywhere near their potential.
With that in mind, what level would Magnus Carlsen reach if he picked up chess at 15? Johnny von Neumann math at the same age? Simone Biles gymnastics at a decrepit 10?
You can begin to sketch an answer for (2) with birth month data among Canadian players in the NHL. Since youth hockey might start at age 6, the arbitrary calendar cutoff pits 6.99 year olds against 6.00 year olds. Consider the following toy model:
OT = -c1 * (23 - a) + c2 * t + c3 * r
Where 'OT' is observed talent, 'a' is age, 't' is inborn talent, and 'r' is both the quality and quantity of training received. This assumes that you come into your full hockey powers at age 23. If c1 is big enough compared to c2, and r is zero, the difference in observed talent between a 6.99 year old and a 6.00 year old is huge even if they have the same inborn talent.
The fly in the ointment here is that training is a scarce resource. There are only so many good coaches, good opponents, and so much ice time. As a consequence, we give out the best training resources at age n to those who were better at age n-1. And this tracks all the way to the NHL level, where Canadians born from July to December make up just 42% of the players. So even if Carlsen had it in him to pick up chess at 15, if he needed any help at all to do so, it would have been very tough to get. But that limitation would have come from "The System," not Carlsen himself.
One could make the argument that we should not structure society in such a way that if a talented person fails to board their train by age 6 it's impossible to catch up. But that's not the one we've built.
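To make the feedback loop concrete, here's a minimal simulation sketch of the toy model above. Everything in it is hypothetical: the coefficient values, the coached fraction, the allocation rule, and the `simulate` helper are invented for illustration, not taken from any real hockey data.

```python
import random

# Toy model from the comment above:
#   observed_talent = -c1 * (23 - age) + c2 * inborn_talent + c3 * training
# Coefficients and the training-allocation rule are assumptions, chosen only to
# show how a birth-month age gap can compound when scarce coaching is handed to
# whoever currently looks best.

C1, C2, C3 = 1.0, 1.0, 0.5   # assumed weights, not from any real study

def observed_talent(age, inborn, training):
    return -C1 * (23 - age) + C2 * inborn + C3 * training

def simulate(n_players=1000, years=10, coached_fraction=0.2, seed=0):
    rng = random.Random(seed)
    # Everyone starts youth hockey at age 6 plus a random fraction of a year
    # (birth month); inborn talent is drawn from the same distribution
    # regardless of birth month.
    players = [{"birth_offset": rng.random(),   # 0.0 .. 0.99 extra years of age
                "inborn": rng.gauss(0, 1),
                "training": 0.0}
               for _ in range(n_players)]

    for year in range(years):
        age = 6 + year
        # Rank players by how good they *look* right now.
        ranked = sorted(players,
                        key=lambda p: observed_talent(age + p["birth_offset"],
                                                      p["inborn"], p["training"]),
                        reverse=True)
        # Only the top fraction gets the good coaching this year.
        for p in ranked[:int(coached_fraction * n_players)]:
            p["training"] += 1.0

    # Among the final top slice, how many were "old for their year"?
    final = sorted(players,
                   key=lambda p: observed_talent(6 + years + p["birth_offset"],
                                                 p["inborn"], p["training"]),
                   reverse=True)[:int(coached_fraction * n_players)]
    older_share = sum(p["birth_offset"] > 0.5 for p in final) / len(final)
    print(f"share of top players born in the 'older' half of the year: {older_share:.2f}")

if __name__ == "__main__":
    simulate()
```

Even with identical talent distributions by birth month, the early-ranking plus compounding-training loop leaves the "older" kids overrepresented at the top, which is the shape of the NHL birth-month data.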
I see your argument, but it also seems completely consistent with (1). Like there are plenty of people out there with the talent, and they could be developed even at advanced ages, but our sieve system doesn't admit that.
It seems like more of a criticism of the system than a means to choose between 1 and 2.
Largely (1), in my opinion with some nod to the general interest level of the student in question. Honestly, this is sorta what Mike H and Marvin and the rest of the responses seem to be saying -- (2) is true if you're asking can we force knowledge into someone and for everyone generally interested in learning new things, (1) is a reasonable description. Speaking from personal experience, I rose to about 80% level of whatever group I was in, which is really really really good when you're around some incredibly brilliant people, but it took time in that environment and once away from it, it hasn't terribly improved. That said, looking back on how easy it is to comprehend new math after a master's in engineering, I think there's a certain amount of repetition and reinforcement that good pedagogy can really help with in people who aren't actively resistant.
I make this argument all the time, occasionally with people in the public education system. Weirdly enough, they admit most of this stuff, but their primary defense of education as a system is that for apparently a large percentage of these kids the 8-10 hours of stability they get at school is literally the least dangerous and potentially healthiest time in their day. That all the education is just a smokescreen to get kids out of largely bad situations and, hopefully, allow for intervention in the case of the absolute worst examples.
My guess is that the reason our system exists is some percentage this, some percentage the benefit of free childcare for workers.
If this is the case, we should replace the teachers with much lower-paid babysitters.
You'd let someone without a college degree babysit the children? :O
There was a comment in a recent thread along the lines of "Modern public schools teach a wide variety of topics so that kids can figure out what they want to do later in life. The things they want to learn about they practice and get better at, everything else they forget." That rings pretty true for me, where I can remember some math, some grammar, etc., but mostly I remember the things that I took further in college or had a natural interest in.
It's an expensive sorting system, where we value the ability of every student (or at least a strong majority) to get a feel for the options available and select between them. Not everyone is very good at those things, but they have the opportunity to look at them for both occupational and hobby interests. By the time they're 18 or so, they have been mostly sorted and the sorting seems to be pretty good.
If you look at K-12 education in that lens, it seems to do a pretty good job. If this hypothesis is true, then both #1 and #2 from your list can be simultaneously true, but maybe irrelevant to the goals of our education system.
I think the sorting is actually quite bad. You still have lots of people who go to law school, for instance, because they're not sure what else to do. They then find they hate being a lawyer. Most of the things you do in school have little to do with real jobs. There are also tons of kids who do know what they're interested in but have to spend time taking lots of other subjects they don't care for. While I think math is cool, I think 3-4 years of it in high school for people who don't care for it is ridiculous as a sorting mechanism. The whole system rejects the idea that kids should be able to do what they're interested in.
I don't really disagree. The primary issue is that schools rarely coordinate with employers, so they don't align expectations. They prepare EVERY student to be able to do just about any introductory field, whether that makes sense for the person or not. I'm seeing a shift in emphasis towards technical fields, which I think is positive, but it's not really enough.
Current public schools tend to do a good job of helping students realize what they enjoy doing, but a poor job of aligning that desire with actual jobs and in demand fields.
From what I know of current research, there is very little parents can do _on purpose_ to significantly influence their kids. The most, I think, is stuff like "basic interpersonal skills", which is non-inherited and teachable. Otherwise it's genetics, home environment and peers, which yes, all depend on parents, but aren't really easy to change at will. You are who you are, and your kids will learn from you.
So what about László Polgár? I think these are extreme cases where you have much higher than normal competence and expended effort, and which probably can deliver results. But the fact that they exist doesn't mean there aren't sharp decreasing marginal benefits on effort.
So I don't think it's ok to generalize from such cases, (and I think there's an inherent bias that will tend to make parents treat their kids as special anyways). Most likely correct answer is 1, for the current state of educational science. Might it change in 50 years? Hopefully, but I doubt it's around the corner.
I know there's good evidence for the limited influence of parents on IQ and personality. But is there similar evidence on, say, the ability to play an instrument, ride a bike or speak a language? These may sound trivial, but are much more relevant to the discussion.
I chose extreme cases because if they didn't exist then it's almost impossible to reason about 2. If everyone got average results then it's impossible to draw any conclusions about the state of our teaching. If some get spectacular results it's worth asking why.
I'd go for 2. Also note that most of the people being taught are heavily resisting it. Simply selecting for lack of resistance should already have a major effect (cf. Mike H's example of Khan Academy). As a university-level teaching assistant, my teaching approach always emphasizes that learning is an activity on the part of the student. I believe the same would hold at other stages of education, except that the students are often not ready to actually take responsibility for their own learning. Therefore, effective learning requires a certain amount of intrinsic motivation on the part of the student. Schools offer plenty of extrinsic motivation to compensate, but that risks getting Goodharted. (In fact, it plainly does; see "teaching to the test".) So, I think it is questionable how much improvement would be gained by starting the learning path at an earlier age if this motivation has not yet developed. In general, it follows that it is more effective to let young children "play around" with advanced toys ("hey kid, check out this Commodore 64"), rather than putting them on a lesson plan ("Ok, let's start with lesson one of "learning to program with the Commodore 64" ").
My own example would be how I learned to read English (I'm not a native speaker). Basically, I learned it by playing Pokemon Gold. I sometimes daydream about "Pokemon Ruby: educational edition". This hypothetical game would not force learning, but simply increase learning opportunity by adding an inbuilt dictionary, gradually increasing the difficulty of the language as the game progresses, and perhaps add some direct rewards for mini-games based on language (something in the flavor of the trick house from Ruby would probably work). Another example would be the "Donald Duck" comics (or at least, the Dutch version). Since the comics are pretty easy to follow without reading the text balloons at all, the text inside does not shy away from more advanced language, and uses a large amount of figures of speech. (the frequent reader of the Open Threads may recall that the Dutch language is enriched with many beautiful figures of speech, such as "Zoals de waard is, vertrouwd hij zijn gasten" ("The host will trust his guests as much as (he is trustworthy) himself"))
In general, it seems there is a lot of untapped potential in this space of "non-forcing" educational games. Young children have a very good learning ability, but lack the discipline to focus it towards ends that are known to be productive. Let their learning river flow downhill, I say, rather than trying to direct it with channels.
This gets back to John Holt, who believed that conventional schooling destroys motivation.
I wouldn't go that far. Schooling as currently practiced often does, but I don't think any far-reaching pedagogical reforms would be necessary to prevent destroying motivation. Rather, I believe the problem is that obtaining motivation for useful topics (in the sense that they will be useful after the schooling has finished) in the first place is a difficult problem to solve, and even more so en masse in the school setting. Solving this problem does require more radical action, I believe. I don't think letting the kids have some choice over a more traditional curriculum is enough, as both freedom of exploration and useful guidance are necessary. This demands an amount of attention that I think is not scalable, unless we can digitize and "gamify" it effectively.
There's some evidence for (2), namely the results seen by home schoolers and the Khan Academy. But I talked to a working teacher once, a friend I grew up with. He knew all about Khan but wasn't particularly impressed. The problem with a lot of cool ideas to improve education is they end up being tried on people who were already motivated and 'special' in some way. Once you truly accept that 'teaching' is 25% imparting knowledge and 75% daycare/pacification of the child, then questions about how good we collectively are at teaching look quite different.
I think it's pretty obvious that the people mentioned are close to their potential in the given area. Do you think they are not?
Have any non-Europeans had experience traveling to the EU during COVID?
I know you can get into your first European country by showing proof of vaccination and a negative test at the airport, but what happens if you want to go to another country (eg from Portugal to Spain, or Spain to France)? Do you have to get another test? Or do they assume anyone coming in from friendly European countries is okay?
I'm a US citizen. I flew from New York to Amsterdam in July. I had to have a negative PCR test to board the plane. No one cared about my vaccination status (I am of course fully vaxxed). In Amsterdam I changed planes to go to Tanzania. There I had to take another covid test to exit Kilimanjaro Airport. It was pretty much just a rip-off. I certainly didn't get covid between New York and Tanzania on planes that had a negative covid test as an entry requirement. Anyway, it doesn't directly answer your question about one EU country to another EU country, but it is a case of an EU country to another foreign country.
I traveled to Austria a few weeks ago, and then to Spain a few days ago. I managed to get into Austria (connecting in Germany, if that matters) just with my vaccine card, but in order to get into Spain I had to get tested (which is really easy in Austria).
Remember that you also need to get tested to go back to the US, regardless of your vaccination status, and that Europeans in general are still barred from the US.
See https://reopen.europa.eu/en/
Each country has its own requirements, which you can usually find very easily by searching for "{country} covid entry requirements"; you will find some official government website. You need to present vaccine or other proofs whenever crossing a border, and some countries require you to fill in an online declaration before arriving.
As someone going from one European country to Greece, I need to show a COVID test done within 48h of entering, or proof of vaccination.
Note that Greece has a nasty trap: you need to fill in a PLF form, but cannot do this in the last 24h before entry. The penalty fee is 500 euro.
NB: I went to Greece a few months ago and nobody was really checking the PLFs. You could wave anything that looked like a QR code at the border guards and that was sufficient. I don't know if that was a fluke or normal though.
Thanks! So I can stress a bit less about it.
There is no "Europe" w.r.t. COVID. Every country has their own constantly shifting policies that change on a day to day basis. EU doesn't control health policy, despite some attempts to do so like the vaccine purchasing programme. Given how badly they flubbed that it's unclear whether more health powers will get transferred any time soon.
As a European: this depends on the country you are travelling to. I had to do a test for some countries, while for others showing proof of vaccination was sufficient. I don't know how this is for non-Europeans, but I imagine it's the same for you.
To get into Portugal, I didn't have to show a test, just vaccination, and not the EU covid passport, the CVS document was fine.
This was at the beginning of August.
For ±30-year-olds, what is the better choice for Covid vaccines (we have J&J and Pfizer available to us)?
The primary consideration would be protection against possible long Covid, but other considerations (including potential side effects, infecting older, vaccinated, family members, etc.) are also factors.
Personally, I favor the mRNA vaccines (Moderna and Pfizer), but only because I'm older and it's possible I've already been exposed to the Ad26 vector of the J&J vaccine -- which could make it much less effective. I think if you're younger that would be less of a concern because you've met fewer adenoviruses.
I second Jeffrey's advice. There is new evidence that strongly backs J&J + 1 mRNA approach as better than homologous approaches.
https://science.sciencemag.org/content/372/6549/1392
I think you've mistaken what that paper is about. It's a discussion of the possible factors that underlie a stronger antibody response in people who have been *infected* and then given a vaccine. It's true that at the end they surmise in one sentence that a mixture of vaccines "may" have similar effects, but they offer no detailed argument for that, nor point to any actual real-world evidence of it.
I got the mRNA, but I've been hoping to get J&J. Not only because a mix would seem to be better, but also because mRNA fades more quickly.
Where have you seen this evidence of the mRNA effectiveness fading more quickly? I hadn't heard that and I'd be interested to know more.
https://twitter.com/andrewlilley_au/status/1428237524212674564
I would be cautious about reading too much into an Oxford study saying that the Oxford vaccine is fading less quickly, especially given that for all the charts, the highest confidence line of each vaccine is still outside the error bars of the other, and there are lines within the error bars of each one that have higher or lower slope than the highest confidence line for the other. It does seem suggestive, but I would want to see actual studies showing an actual period where mRNA protection is lower than adenovirus vector protection before being confident that this difference in fading is real.
Andrew noted that it was expected beforehand that would be the case, and it wasn't clear to me why that would be a priori (admittedly, I hadn't heard of either mechanism prior to COVID). It might be related to the fact that some vaccines are designed to optimize just antibodies and not T-cells:
https://www.businessinsider.com/johnson-and-johnson-covid-19-vaccine-arguably-the-best-2021-2?op=1
https://cen.acs.org/pharmaceuticals/vaccines/Adenoviral-vectors-new-COVID-19/98/i19
There is a general write-up about 3rd dose vs. vaccine efficacy (reduction) over time at https://yourlocalepidemiologist.substack.com/p/confused-about-the-3rd-dose-me-too which is a good read and has links to studies. (It doesn't substantiate "more quickly" BTW)
Are there any good resources on the prospect of genetically engineering higher IQs?
You might be interested in this summary by gwern:
https://www.gwern.net/Embryo-selection
If anyone is looking for interesting content across tech, finance, art, media feel free to check this out: https://gokhansahin.substack.com/p/curated-content-for-busy-folk-45
Also, how can I report this comment as spam?
Please stop spamming it in every thread. Also, "content across tech, finance, art, media" is hopelessly vague - no surprise that you need to resort to spamming.
I know nothing about AI research technically. I have no training in computer programming or mathematics, so I will admit my ignorance. But I have a question. Is there an algorithm for what I will term
the will to live??
Bear with me.
Imagine your life under the most intolerable circumstances. You are in a concentration camp, you are in solitary confinement and subject to random torture and other abuse. The rational conclusion seems to me that you should just get out of it any way you can. But human beings don't seem to behave that way.
Would you call that the will to live?
You don't need a fancy AI to implement "the will to live". You could make a really simple robot car that would roll around on a tabletop, and avoid falling off the edge -- and you can build it out of purely analog components, no computers needed. This car arguably has as much "will to live" as a mouse (I mean, the furry kind, not the computer peripheral kind).
"Algorithm," isn't really the term you want here. The algorithm isn't the thing which is alive or dead, so an algorithm alone cannot compute, "the will to live."
As we move from narrow intelligences to more general intelligences, we tend to turn our systems into "agents." An agent has some physical substrate. In order to effectuate self-preservation, the agent needs to have a model of what physical form it exists as, and then it can make decisions which avoid its destruction. Many narrow intelligences already have agent-hood and self-preservation. Take a self-driving car: it knows its physical dimensions and avoids crashing into things. You might say a self-driving car has a will to live, as we keep seeing them make decisions which avoid their destruction.
Even digitally, many of OpenAI's video-game playing AIs exhibit this. They move away from hazards and towards the end of the game. The engineers create some scoring function in order to evaluate the AI's performance. In order to get the AIs to do something rather than nothing (because in some games, not moving will never win the game but it will avoid death), time was made to negatively impact the score: not only should the AIs move towards the end of the game, but they should do so quickly. From this time pressure immediately emerged suicide: if the AI was unsure it could make a certain amount of progress without dying, it might immediately kill itself to avoid the time penalty of trying. Obviously they had to come up with more sophisticated ways to score progress from then on.
So in general, is some analog of "the will to live" present in our systems? I think we generally just call it "self-preservation" but yes, if we want our agents to survive for very long, we generally need to design in some self-preservation. It's not really an algorithm, more a quality of the agent.
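The time-penalty effect described two paragraphs up doesn't even need a trained agent to see; a toy expected-score calculation is enough. The sketch below is not any actual OpenAI environment or reward function; the probabilities, rewards, and per-step penalty are all made up to show why "quit immediately" can outscore "try and maybe fail slowly".

```python
# Toy expected-score calculation, with invented numbers, showing how a per-step
# time penalty can make "die immediately" score better than a risky attempt.

def expected_score(p_success, reward_on_success, steps_needed, time_penalty_per_step):
    # If the attempt fails, the agent has paid the time penalty for nothing.
    success_score = reward_on_success - steps_needed * time_penalty_per_step
    failure_score = -steps_needed * time_penalty_per_step
    return p_success * success_score + (1 - p_success) * failure_score

# Strategy A: try to reach the goal (uncertain, slow).
try_it = expected_score(p_success=0.3, reward_on_success=100,
                        steps_needed=200, time_penalty_per_step=0.5)

# Strategy B: jump into the nearest pit right away (certain, instant, zero reward).
quit_now = expected_score(p_success=0.0, reward_on_success=0,
                          steps_needed=1, time_penalty_per_step=0.5)

print(f"expected score if it tries: {try_it:.1f}")    # 0.3*100 - 200*0.5 = -70
print(f"expected score if it quits: {quit_now:.1f}")  # -0.5
```

Under these assumed numbers the score-maximizing move is immediate self-destruction, which is exactly the kind of behavior the engineers then have to patch with a more careful progress reward.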
Thanks for this.
There is a well understood scheme explaining how properties similar to a "will to live" emerge in a smart enough goal-directed agent.
Agents will take actions that increase the chances of fulfilling their goals and score more points on their utility metrics, and try to prevent actions that decrease them. In most cases being active increases the chances of success, therefore "will to live" is an instrumental goal for most agents. Obviously there are exceptions. If the whole goal of an agent is to have a cup of coffee delivered to you and someone is already bringing you some coffee, such an agent wouldn't mind being shut down. Special environments can even motivate agents to commit suicide if it is somehow the best strategy to accomplish the goal.
Human goals are complicated and multidimensional. So it's not that easy to say what the optimal behaviour would be in your example. Some people may value the tiny chance to escape and accomplish their goals in the future more than they dislike all the torture and abuse now. Some may not. Some will commit suicide, some will not.
It's also easy to understand how humans can sacrifice their lives for some greater cause, or knowing that this sacrifice will help accomplish their goals, or how people die in peace, knowing that their goals are accomplished. It seems to me that the difference between AI preservation as an instrumental goal (will to score) and human will to live is only the difference in the utility functions.
I take your point.
For the purposes of your question there are <furiously handwaving> two kinds of "AI" using the term in its modern context.
1. Pattern matching/generating AI. This is the type we actually use in the real world for useful tasks.
2. Goal-oriented agents. For example, non-player characters in a video game.
The line between them can get a bit fuzzy because often the pattern matching AI is wired up to some IT system that's making decisions (semi-)automatically. But from an AI perspective they're different.
Pattern based AI is just a pure algorithm. It has nothing that could be described as will. It outputs a guess about something, usually with a score or probability attached. No decision is made at any point, and the AI cannot learn from experience whilst working; instead, all learning is done via a separate training process that outputs a new version of the AI. If you don't run the training process and replace the AI, it will keep making the same mistakes forever.
Goal-oriented agents are the type of AI that DeepMind focus on. It's the type that Hollywood mean when they make movies about robots that kill their masters and try to take over the world. Agents are closely related to the video game world. Given an objective expressed as a 'score function' or 'specification function', an agent will learn to maximize its score using whatever strategies are available to it. Building agents turns out to be very hard because it's easy to create a scoring function that doesn't quite express what you really meant, hence this hilarious spreadsheet of AI fails where the AI learns to achieve its task in ways that the designer didn't anticipate:
https://russell-davidson.arts.mcgill.ca/e706/gaming.examples.in.AI.html
This type of AI has no specific "will to live" but it does have a "will to score" and usually to get a higher score, it must be alive. In other words such an AI will resist attempts to shut it down. This leads to the field of AI safety and all the subsequent discussions you'll see on Scott's blog about whether super smart AI will take over the world in future.
Note that the differences between "will to live" and "will to score" can be very subtle. In one famous example of specification gaming an agent being trained to play Tetris learned to pause the game indefinitely a split second before it was due to lose. As its only purpose was to play Tetris, this can be seen as a strange form of suicide.
Thank you.
Is this some kind of GPT-3 generated comment? If so, it's disappointing.
If a human wrote this, then I would say that your use of the word "algorithm" doesn't make sense in this context, and you should consider rephrasing without that word because with it, it's incoherent.
The will to live.. Siri disappoints me constantly
I started taking 400mg of SAM-e, based on the discussion in the depression post. I am seeing pretty dramatic results-- depression gone; much less anxiety; much less rigidity, ie more flexibility, more 'openness to trying new things'; also greater motivation, or, perhaps, less of a hump to get over to get myself to go do something; and, last but not least, greater happiness, and more enjoyment in life. A real life changer. Thank you.
I had the same experience about 10 years ago. The first 3 days frankly felt like I was on a low dose of MDMA. Then it leveled off, but still lifted me out of a pervasive depression. I stopped taking it after a few years. Occasionally I've tried it again, but it never seems to have anywhere near the effect of that first round.
Every time I read something like this I worry about placebo.
Do Gwern's self-double-blinding trick?
Psychiatric Intervention, or: How I Learned to Stop Worrying and Love the Placebo
It has less sucky side effects than SSRIs and is much less expensive than a therapist
Glad you're doing better. It seems to work really well for some people.
I'm curious whether anyone caught Tesla's "AI Day". I did. They showed off their neural network, explained how they convert 8 camera video feeds into a 4D vector space, and then have a planning engine on top of that which decides what to do. They also showed off their new from-silicon-up "Dojo" neural network training computer that is optimized for the task.
They also showed a new Tesla humanoid robot, and said they plan to build a prototype next year. The idea is that it can do basic and boring human tasks. Elon casually mentioned that long term, he supports UBI.
I'm curious if this causes any updates in people's thinking about AI risk timelines, and also people's thinking about UBI in general.
I'm impressed with the things Musk has gotten done, but I also consider the humanoid robot stuff 100% grade A lies.
Coincidentally, I read a very sad news story today about a fatal crash involving a Tesla on Autopilot mode (the self-driving but not really mode).
The fault clearly is down to human error (the guy was driving on a back road at night, he was talking on his phone, he dropped the phone and bent down to look for it, is it any wonder the car crashed into another?) but equally obviously the family of the dead person are suing Tesla because that is the entity here with deep pockets.
The story is surprisingly negative: it contrasts what other car manufacturers do in such situations (i.e. making drivers keep their damn eyes on the damn road) with how Tesla is set up, and has one quote from Musk a few years back that reads badly in this context.
So I think whatever about AI risk, this demonstrates yet again that the big risk is not from the AI, it's from the way humans use it.
https://www.irishtimes.com/life-and-style/motors/it-happened-so-fast-inside-a-fatal-tesla-autopilot-crash-1.4650265
"One of a growing number of fatal crashes involving Tesla cars operating on Autopilot, McGee’s case is unusual because he survived and told investigators what had happened: he got distracted and put his trust in a system that did not see and brake for a parked car in front of it. American Tesla drivers using Autopilot in other fatal crashes have often been killed, leaving investigators to piece together the details from data stored and videos recorded by the cars."
https://theconversation.com/why-the-feds-are-investigating-teslas-autopilot-and-what-that-means-for-the-future-of-self-driving-cars-166307
"It’s hard to miss the flashing lights of fire engines, ambulances and police cars ahead of you as you’re driving down the road. But in at least 11 cases in the past three and a half years, Tesla’s Autopilot advanced driver-assistance system did just that. This led to 11 accidents in which Teslas crashed into emergency vehicles or other vehicles at those scenes, resulting in 17 injuries and one death.
The National Highway Transportation Safety Administration has launched an investigation into Tesla’s Autopilot system in response to the crashes. The incidents took place between January 2018 and July 2021 in Arizona, California, Connecticut, Florida, Indiana, Massachusetts, Michigan, North Carolina and Texas. The probe covers 765,000 Tesla cars – that’s virtually every car the company has made in the last seven years. It’s also not the first time the federal government has investigated Tesla’s Autopilot."
Hmmm. Now, I'm not saying there isn't something wrong with Tesla's autopilot, only that nothing in the above post should be thought of as being equivalent to "tesla's autopilot is unsafe compared to humans". It is very unlikely that any self driving system will be 100% safe. Thus deaths will occur. It is almost certain that the type of deaths will be different to those we are used to.
Personally, I'm interested in my absolute risk of dying in a car, not the particulars of how that occurs. So, the real question is: on a per 100,000 km basis, which is safer? Me driving a Telsa manually or letting the autopilot?
By all means figure out why flashing lights freak it out, but that is kinda not the real point, is it?
The Tesla website has some pretty big numbers that sound pretty compelling. https://www.tesla.com/VehicleSafetyReport (I know autopilot engaged does not equal reading a book in the back seat)
So far using "self driving" in cars appears to be about the same risk level as driving slightly drunk: as long as everything is normal, you're OK. But if something weird and unpredictable happens, you've got a problem.
And of course, the usual argument for why we need self-driving cars is because humans occasionally drive drunk, sleepy, et cetera. If the self-driving cars can only handle themselves confidently under the conditions a drunk or sleepy human could...meh.
Boston Dynamics spent over 20 years figuring out how to make a bipedal robot move as gracefully as a human, and Tesla thinks they can do it in a year? Good luck with that.
And that's just for moving around, in a controlled industrial environment. Musk wants a robot you can tell "pick up that wrench over there and tighten that bolt," which is several steps beyond that.
I've long stopped believing (if I ever did) any of Musk's announcements. He's a modern-day P.T. Barnum who is a master of hype and what will get the public excited, delivery of same is a completely different matter.
I have to admit, SpaceX seems to be working, so that is in the plus column for him. But autonomous robots that will do the grocery shopping for you? Starting from this time next year? I'm more inclined to put my faith in "I need a new pair of shoes, I think I'll see if there are any leprechauns around to make a pair for me".
My partner recently got a new pair of shoes - we've been into minimalist shoes for years, but he tried this new company, and their shoemaking elves had great service: https://www.softstarshoes.com/about-us#meetus
To be fair, he did not promise robots that do grocery shopping, or really anything more than _maybe_ a prototype built next year.
I think Musk's successes go beyond SpaceX. Tesla is overall looking quite positive and has forced change in the auto market towards electrification sooner than would otherwise have happened.
While I personally think Musk is a dumb idiot asshole who lucked into his fortune and mainly sponges off of actual talented people, Tesla took a bunch of technical risks that look like they paid off/will pay off big, and having him as a front man helped massively.
Yeah, after how many years of promising totally self-driving cars, now it's fully general AI-driven humanoid robots. This, like 90% of the announcements coming out of the company, is nonsense aimed at investors and branding.
Not for me. The more I look at UBI the more support for it looks like a kind of performative "ok socialist" peacock routine.
A UBI is basically welfare++ rebranded in a form that's sufficiently extreme and thus, distant in the future, that it appeals to utopians who don't think about it too deeply. There's nothing fancy about it. It's just what we have today, amped up to 11. It would pose the exact same problems the existing welfare system poses, but much worse. In particular, the level of government money printing in most countries combined with the unrealistically optimistic state pension projections, strongly implies they cannot actually afford today's welfare systems, let alone welfare++. Because UBI is just a new re-proposal of socialism, most of the discussions around UBI look old and tired despite the shiny new gloss afforded by giving it a three letter acronym. For example, UBI schemes are often posited to let people become better people by focusing on things that are hard to make profitable (which - left unstated - are implied to be better things). Real world UBI trials don't bear this out however, and when they fail, the explanation is always that it would have worked if only they'd UBId harder. That's the exact same explanation you hear when asking those on the hard left why communist countries failed: they just didn't do it right!
Musk likes talking about UBI because he likes robots, and mentioning UBI lets him avoid thinking about the economic dislocation caused by robots without needing to do anything near term that would cause smart people around him to rationally object, like actually campaigning for more welfare. It's hard to feel like it's more than that.
The main problem that welfare has that UBI doesn't is means-testing. Making people fill out lots of paperwork and wait a while for the system to decide if they're a Deserving Poor or not adds a lot of friction to the process and increases costs compared to simply mailing out checks.
(There's "means testing" of a sort at the other end when taxes are due and some people get taxed more than they get in UBI, but the IRS already does that for everyone so there's no increase in bureaucracy. It also prevents any sort of "welfare cliff" since a progressive tax system doesn't work that way.)
Like, I don't know what articles you've been reading, but nearly every one I've seen on UBI has included a few reasons why it's better than simply putting more money into the existing welfare system.
I've read the same articles but they don't make sense.
The idea that administration overheads are so large that they could pay for UBI crops up frequently, or occasionally in the weaker form you're proposing here, where maybe the overheads can't cover the additional costs but they're still very large. This claim is never backed up with evidence though, it's just asserted. Here's an extensive discussion that seriously undermines all such claims:
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.835.2632&rep=rep1&type=pdf
However you don't really need a paper to see this. Consider that UBI must still have very significant administration and means testing costs. If it didn't then nothing would stop fraudsters creating fictional people and registering for UBI. Also problematic, nothing would stop tourists, people resident elsewhere and so on from registering. Thus a lot of administration work must go into establishing that a claimant is (a) real, (b) not dead, (c) probably of an eligible age (d) not in any debt especially to the state (as debt holders should be serviced first) and so on and so forth.
The case for UBI always seems very weak, as if people backing it haven't really sat down and spent time thinking about the precise details of how it'd work and how much it'd cost. Even ignoring implementation costs, the benefits are often quite wishy-washy and naive/idealistic, of the form "lots of people would become poets and artists".
>The case for UBI always seems very weak, as if people backing it haven't really sat down and spent time thinking about the precise details of how it'd work and how much it'd cost.
Plot the net of taxes & transfers which would be replaced by implementation of a UBI (i.e., means-tested welfare programs which would be shuttered) against pretax income. Perform an OLS linear regression against the data. The y-intercept is the UBI amount with approximately no net budget impact.
> i.e., means-tested welfare programs which would be shuttered
If someone spends their UBI on video games, and has no money for food, do we let them starve in the street?
Otherwise, we just regrow all the old welfare programs.
> The idea that administration overheads are so large that they could pay for UBI crops up frequently
Nobody has ever claimed that overheads would pay for UBI completely, merely that eliminating the overheads of multiple programs and the political wrangling that comes with it frees up more funding to actually help people.
> However you don't really need a paper to see this. Consider that UBI must still have very significant administration and means testing costs.
It doesn't, that administration cost is *already being paid and will not go away* because it happens with your yearly tax return. Pay everyone a UBI, at tax time they declare what they actually made, and you recoup the UBI above a certain threshold.
I feel like a lot of people really overcomplicate this topic.
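As a toy illustration of that claw-back mechanism (the UBI amount and flat tax rate here are made up, and real proposals differ in the details):

```python
# A flat UBI recouped through the ordinary income tax has no "welfare cliff":
# net support falls smoothly with earnings instead of vanishing at an
# eligibility line. Numbers below are hypothetical.

UBI = 12000        # assumed annual UBI, dollars
TAX_RATE = 0.30    # assumed flat tax rate used to recoup it

def net_transfer(pretax_income):
    """UBI received minus extra tax paid; positive means net recipient."""
    return UBI - TAX_RATE * pretax_income

for income in (0, 10000, 20000, 40000, 80000, 160000):
    print(f"income ${income:>7,}: net transfer ${net_transfer(income):>9,.0f}")
# Break-even at UBI / TAX_RATE = $40,000 -- above that you pay in on net, but at
# no point does earning more make you abruptly worse off.
```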
You don't need heavy administration and means testing to ensure that the recipient exists and is a living citizen of your country.
The problem with administration and means testing is in ensuring that someone who makes ~$10,000 a year actually has the paperwork to prove that they're not secretly making ~$200,000 a year, and that filling out this paperwork is one of the reasons so many social benefit programs fail to get to their recipients.
I'm not sure how much easier it is for a person living under a bridge in Austin to prove that they exist and are a citizen than to prove that they make less than $200,000 a year, but the claim is that it's enough easier that they're more likely to actually get the benefit.
Another problem with means testing is that those on welfare have little incentive to take a job that doesn't pay substantially more than welfare. To me, that seems like the main benefit of a UBI.
That was me!
I spent a couple years "Not Working" 'cause if I took a job, I would suddenly have to pay 15k+ for shitty health insurance with a bigass deductible, and that is with Obama's shitlib credit bullshit.
>Thus a lot of administration work must go into establishing that a claimant is (a) real, (b) not dead, (c) probably of an eligible age (d) not in any debt especially to the state (as debt holders should be serviced first) and so on and so forth.
In the UBI proposals I have seen, individual income taxes get raised to the extent that the effect is net zero on the median* taxpayer. In most countries something similar gets done anyway by the tax authority, citizenship / electoral roll records, any existing welfare benefits, the school system, the military draft system in the few countries that still have one, and such. If you already have a system for that, adding UBI instead of means-tested welfare isn't much of a change.
* Or some other percentile point, depending on how socialist one's dreams are.
The humanoid robot thing seems pretty dubious. But it is a good distraction to change the news away from Tesla getting slapped with some investigations from the Feds over the Full Self-Driving Program.
Musk likes to trot out Cool New Stuff whenever there's bad news afoot for Tesla or SpaceX. It gets reporters talking about how the Cool New Stuff is cool, or stupid - but either way they're talking about it.
This seems overly cynical. Musk likes to trot out Cool New Stuff because Musk is the kind of guy who really likes Cool New Stuff.
I don't know why people, even the sort of people who usually like this sort of thing, have to be so cynical about Musk. There's seven billion people on this stupid planet, and sometimes it seems like Elon is the only one who is actually doing anything worthwhile. In most other contexts, people seem to understand that big technological bets are hard, and that if even 10% of your big bets succeed then you're doing pretty well. But when conversation turns to Musk, who has a much better strike rate than just about anyone else while also pursuing more ambitious goals than anyone else, all you hear about is the fraction of his projects which failed or else weren't delivered on time.
> sometimes it seems like Elon is the only one who is actually doing anything worthwhile
This sounds like Musks' reality distortion field hiding everyone else who is working on these same problems in a less flashy, and often equally or more effective, way.
Well, his two main things, long-range electric vehicles and reusable rockets, do seem to be the best in class, and he has the benefit of being able to say he was working on them before anybody else. It seems like whether you think people are drinking the kool-aid depends on how much you think people are basing their perceptions on his many side projects.
I understand why people, probably the very people who work at Waymo or some brain-interface company, get mad about being overshadowed by Musk's flashiness, but I don't think it's a legitimate gripe and I observe many of them take their jealousy too far. Getting the public excited about science and technology is important, and if anything it isn't valued enough. Most companies don't appear to want Musk-level publicity, and that's fine.
I don't know much about the brain-interface stuff. My biggest interest that he talks about is transportation technology, and with both Hyperloop and Boring, his main goal seems to have been attracting attention away from established alternatives in order to preserve the market for automobiles rather than high-capacity transit. It's not like he's the first person to propose pneumatic or vacuum tube transport (these ideas go back at least to the 1950s), and his main idea for making tunnels cheaper is to make them too small to carry many people.
It's good to get people excited about new things, but not by telling them that actually workable solutions are boring so that you can hold out the promise of exciting transit that will actually just require continuing to buy cars from his car company.
Ok but do you actually think he's meaningfully detracting support from projects like high speed rail? Was Vegas ever really going to build an efficient transit system under their convention center? It seems to me that, being Vegas, the project was always intended to be an attraction. And even still, the tunnel was completed on time and for a very reasonable price. If anyone wants a tunnel that size, there is now a proven, inexpensive option. That seems like a minor win for American civil engineering to me.
It seems to me like you have an unnecessary zero-sum attitude about this. The idea that Musk is "preserving the market for automobiles," as if that's something that he even feels a need to do, seems a little excessively paranoid. He's making a company that digs tunnels. The Boring Company is an independent for-profit entity from Tesla, they will surely dig a tunnel for whomever is willing to pay them to dig a tunnel.
The company's next project (allegedly https://www.bloomberg.com/news/articles/2021-06-18/elon-musk-s-boring-co-pitches-double-wide-tunnels) is to dig tunnels that are 21 ft in diameter, and to dig two of them side-by-side. Are you convinced that that, too, is some Trojan Horse against public transit?
I don't trust Elon Musk as far as I can throw him, at least on non-rocket topics.
I doubt there's a functioning humanoid robot in Tesla's near-term plans. If there is, I am fairly sure it's a bad thing. In that sense, I can only assume that Mr. Musk is deliberately cultivating a minor risk to head off a major one.
Well, if he can develop the functional humanoid robots, then he can get them on the assembly lines making Teslas, and then maybe the delivery backlog can be cleared:
https://www.cnbc.com/2021/08/18/months-long-delivery-delays-confound-would-be-tesla-owners.html
I think it’s interesting that the penultimate display of AGI is a robot that can tap dance.
It was a human in a spandex suit FYI
Proof of concept I suppose
I have been looking into grad school for mathematics, in particular with the goal of eventually doing research. It looks like the job market for research positions is incredibly tight right now, but a large part of me wants to just study mathematics anyways. There are countless articles talking about how job prospects for pure math PhDs are actually okay, but most of the time they bring up examples of big data and marketing analytics, which I have something of an ethical opposition to. So I figured I would ask: What sorts of jobs can you get with pure math, other than data analytics?
FWIW, my team lead at Google had a PhD in pure math, and if I recall correctly he got hired right after completing it. I don't know if he learned to code before or after it though.
That was what I did (probably not on your team though). It was a bad choice - you should start out somewhere where the math gives you at least some advantage (not that you'll be doing abstract algebra anywhere outside encryption, but there are lots of places where good intuition is useful). Google is bad both because it has no room for that (99% of the work is writing protobuf pipelines), and because it's a bad learning environment (the typical response to "I don't understand X" is "read the code/git good").
I took a pure math PhD (mathematical physics) but ended up as a health policy analyst. Nothing compares with pure math with respect to engaging the mind, but I do feel like I often work on hard problems across a wide range of areas, and generally feel like I'm contributing to the greater good. I do work with big data, but also design complex randomized trials and have opportunities to learn and apply economics, psychometrics and medicine. I work with doctors, scientists, economists, and psychologists, almost all of whom like to learn new things that will help people. I also earn multiples more as a consultant than as an academic, and have been paid to travel to Europe, Asia and Africa just to talk to people about what I do. So while I miss mathematics quite a bit, I feel pretty good about where I landed.
I second most of what previous commenters said, and would like to add/reframe just a few ideas:
1. When I started my math phd, I was quite sure that I wanted a career in pure math research, but I underestimated how difficult it would be to keep a long-term interest in questions that are so theoretical (I studied algebraic geometry/mathematical physics). The math community can be a bit inward-looking, and it turns out that I'm much more interested in playing around with up-and-coming technological tools; I'm more tinkerer than thinker. Still, I think that pure math phds are still worth it, for those who are genuinely interested in the subject -- on the off chance that your projects work out and you maintain your interest, math research is one of the most intellectually rewarding careers.
2. Regarding job prospects after the phd, I used to believe in a dichotomy between pure research and selling your soul to wall street or facebook. Luckily, the spectrum is much more continuous. Like many others these days, I ended up in bioinformatics research, which I really enjoy. To me, one particularly appealing feature of this field is the existence of long-term academic positions other than such-and-such professor: bioinformaticians and research scientists who get to do the fun tinkering without worrying so much about people management or grant applications. For those who like the culture and intellectual freedoms of academia, but want less involvement in the associated management and politics, this is an option worth exploring.
3. Related to 2, there are ways to hedge during your math phd, by working on pure areas which are close enough to applications. Others already mentioned cryptography and formal verification. Mathematicians more interested in geometry/topology could check out topological data analysis, or the recent applications of gauge theory to obtain convolutional neural networks which are equivariant w.r.t rotations or other symmetries.
4. Anecdotally, I agree with Elena that a math phd has a lot more signalling value than a math master's. In practical terms, some research scientist jobs in both academia and industry are gated behind a phd requirement. In industry these kind of jobs are a minority, but they're likely the ones where you get to do the most interesting work.
I may have a somewhat unique perspective relative to the commentariat here, in that I enjoyed the process of getting my PhD (in pure math). I left to industry halfway through my second postdoc, partly because I didn't have great postdocs (and didn't have enough of my own research program to thrive despite that), and partly because I wanted to settle down in a particular location and would rather do industry things than teach. In pure math (possibly unlike any experimental science ever), I found the graduate school experience to be "not a lot of money in exchange for not a lot of work," and would have considered stretching it out longer if a scholarship were available. (I'm from the US, but for Reasons did my PhD in England, so the standard US option of "you will teach some calculus classes in exchange for tuition + stipend" was not available.)
Back In My Day (PhD 2012), the exit route was finance first, then data science. This may have flipped. Also, data science isn't quite equivalent to marketing, e.g. I worked for a logistics company, and some -- though certainly not all -- of what the group did actually involved doing the logistics better. I've also seen manufacturing companies where the data science group is involved in process control.
I currently work in an engineering outfit (my title says Something Something Engineer). I think this is not terribly uncommon, although many of the opportunities will be tied to the Department of Defense. Of course, there is also the NSA, and people I know who worked there have said good things about the work environment.
A couple people recently mentioned quantum computing to me as their (actual or potential) exit strategy.
Google (I think in the form of straight-up software development) was an exit route for a few people I know, with varying success.
Bell Labs used to be a good exit opportunity if you could wheedle your way in, although I gather it's less of a "math department minus the teaching" now than it was in its heyday. There should be at least a few similar industrial research groups out there (e.g. Google, Amazon, IBM).
I will reinforce everyone who mentioned programming experience being very valuable outside of academia. I think you do get a salary premium for having a math PhD, but in most places it's essentially a monetary equivalent of the "wow you have a math phd you must be so smart".
This is good, and mostly matches my experience. I will advise against Google specifically - they're not good at accommodating people with academic but not direct work experience, and it can be a pretty awful environment unless you have an unusually good manager.
I did a math PhD and now do bioinformatics. It's not a super common route but there are a lot of people with pure-math backgrounds that have ended up doing well in this field, and the job market is considerably better than in math. (That's a low bar to pass though!) The other advantages of bioinformatics are that it still involves math and that it's very broad, so you can do interesting, different things every day (eg: attend a biology talk, do software development, and read a stats paper), and it's still possible to do academia if that's your thing. The downsides are that it's just nothing like doing math research and a lot of what we do is incredibly basic, like cookie-cutter stats 101 and making a few plots. Also you'll be frustrated at the number of downright incorrect papers that get published in this field, with just absurdly bad statistics that somehow get past reviewers unscathed. Maybe it's just my niche, but it seems like all the statistical methods papers out there mess up one way or another but no one cares.
Echoing Luke G, make sure you really know some software development, and not just at the "I took an intro to Java course eight years ago" level. It'll make your life easier in nearly any job you could end up in later that could use your PhD. If you don't like computers, don't do bioinformatics.
I'll add again the standard advice that a lot of people don't end up enjoying their PhD, and if it's not really going to directly help your career, you should consider not doing it. I hit a point where I was not enjoying mine, when I struggled to transition from classes to research work, and I might have dropped out then. Luckily my advisor put me on a tractable problem and I made some early progress, which was encouraging and got me back on track to liking what I was doing. Many other students are not so fortunate, or pick projects that are simply too difficult. Many of those drop out. Many also don't even make it to that point. Especially if you're thinking of non-math jobs afterwards, pick a research topic that won't have you stuck for years, even if it's less sexy - no one will know/understand what you did afterwards anyway. You can work on that millennium problem in your spare time, but make sure you're making progress on something else too.
Math PhD programs, at least the fairly good ones, won't help you get non-math jobs, generally speaking. Expect them to be basically blind to the entire concept despite half their graduates going that route. Be proactive, since no one will be active for you.
I studied pure math, too--not a PhD, though I do know several PhDs as well. It can be great for career opportunities, *but* you really need to know computer programming, too. Most companies trying to fill quant-y roles are looking for general problem-solving ability rather than domain knowledge--if you have a solid base in math and CS, you can pick up most of the domain knowledge quickly. Also, everyone knows math majors are the smartest ;)
The most lucrative careers for math PhDs are usually in software and finance. The hot fields in software change every few years (AI, machine learning, big data, etc.), but a good math and CS background will make you well-equipped for any of them. In finance, there's a spectrum of different quant roles who love to recruit PhDs, and many traders and structurers also will recruit quanty people.
There's a pretty good diversity of roles available, and so you'll have some ability to make tradeoffs in work-life balance, amount of CS required, and the type of work--although it may take some amount of exploration before you really figure out what's the right balance for you. (I definitely had to revise a lot of beliefs about myself after a couple years on the job!)
A few assorted points to consider:
* Although a PhD is a unique opportunity to do research, it comes with a large opportunity cost and I know several people who seem to regret it--the research doesn't go as well as hoped, the social experience isn't as fun as undergrad, and you'll be a few years behind in your career. So make sure you're confident about loving research!
* Definitely take a few CS classes to maximize your opportunities for industry jobs. Even better, get involved in a software project.
* Consider taking some econ classes, too. Chances are it'll be embarrassingly easy for you, but the knowledge is very useful!
* I'd encourage you to keep an open mind when considering careers, including your assumptions of the ethics involved--a lot of media/social media portrayal is extremely misleading, and a lot of people in academia have misconceptions about industry.
* A summer internship in industry is a great way to pad your resume and help you figure out whether that industry is right for you.
> Chances are it'll be embarrassingly easy for you, but the knowledge is very useful!
Re-reading this, I should say I don't mean to belittle the field of economics--I have tons of respect for it and wish I studied it more. I just meant that typical Ec 101 curriculum will be pretty easy for someone with a strong math background.
As someone far too many years into a Physics PhD: if you love the field so much that you want to spend your life doing research, and a job is a way to avoid starving to death and dying of exposure in the meantime, then academia is right for you! If you are at all concerned with standard notions of "good job prospects", you don't want to go near academia, you're looking to convert to industry.
In that case, a Master's can be a decent investment, but PhDs are basically never worth it - at best you're trading a couple of years of salary for a lifelong sense of superiority, since IIRC the studies, actual pay scales don't value the PhD more highly than the same years spent working in the field. People who could do a PhD are very smart and get paid well, but the PhD itself isn't adding enough to justify the cost in time and health.
In the case of mathematics specifically, all the money at the moment is in the overlap of maths knowledge and programming knowledge, so you will need to learn some coding. Firms generally recognise that it's much easier to teach a genius mathematician to be a functional coder than to teach a coder graduate level maths, though, so do still major in Mathematics (or Statistics) and treat coding as a secondary skill for now - unless, of course, you discover you just love doing it :)
Unless things have changed since Back In My Day (PhD 2012), in pure math having only a masters is a sign that you were going for a PhD and dropped out. There are a few exceptions where the masters is a well-known program (e.g. Cambridge Part III) which you're doing as Undergraduate+ rather than as PhD-, but in general, a masters in pure math signals that you dropped out of your graduate program. I don't believe this to be true of adjacent disciplines, e.g. data science or computer science or statistics.
That sign is well-known within academia (for any discipline that doesn't have an established non-academic career trajectory for masters degrees), but I don't know how well-known it is outside academia.
I dunno. I have a good friend who is a physics PhD working in the microchip industry. His division is very technical and he only hires physics PhDs. FWIW.
To echo this, I have my PhD in Physics and work in optical comms; your career will eventually cap out if you don't have a PhD (albeit at a relatively high level that may be perfectly acceptable to you), and a PhD will both open doors to you that may otherwise have been closed AND make it much easier to get your first job. If you don't want to eventually run shit, a Master's is fine, and you should be very, very careful to properly weigh the reduction in quality of life you'll experience during the extra years you're in school.
You can generally get a SWE job (your PhD gets your foot in the door, but you will have to learn programming stuff on your own time and it's a hard transition). Personally I'd recommend going the quant finance route - you'd have a better comparative advantage there and they're more receptive to mathematicians. Not sure how your ethics feel about that though.
FWIW I feel okay about having done a math PhD and then done a career change (my first job after grad school was pretty awful, but things improved after that). It can be an interesting thing to do that leaves you reasonably well-qualified for other jobs later, but OTOH if what you really want eventually is career advancement in some specific field better to just go straight for that.
I've worked in quant finance and I've worked in big tech (Physics PhD here), and I think the good times in quant finance are already over; you're likely to make more money and have a more pleasant job in big tech.
You should really, really, REALLY like math research to do this. And even if you do, a lot of math culture is an inward facing status game. Talented people somehow seem to get into the pure math track even when their skills provide a lot more leverage in practically every other research field.
What are some good research fields that I should go into instead? I have looked into economics, computer science, electrical engineering, and law, and while all of them seem enjoyable, first I have to get into the programs without the relevant degrees. Noah Smith has written about why an econ phd is a very good investment (also much of econ seems to be written in the same "language" as the mathematical logic that I study), is this something that can be easily pivoted to? Is doing something like that a wise decision?
I have an econ PhD. Being good at math is vastly more important than being good at undergraduate economics for econ PhD programs. I suggest you read intro and intermediate textbooks in micro and macro economics if you want to do this.
Sorry, posted too soon. In computational biology you'd be bringing math skills to bear on a field that doesn't really like computers (or at least views them as annoying necessities) and isn't in general that mathematically inclined. Also, given our strange biomedical times, there's good reason to think the biomedical research funding complex is going to be comparatively richer than funding in other fields going forward.
Having studied both, I think Smith is largely right to be long the economics Ph.D, especially over math. If you have good quantitative skills, the other one I would mention is computational biology.
If you're Yitang Zhang, you can use your math PhD to get a job at Subway.
(Less jokey response: if you can get someone to pay for your PhD, I say go for it. Just realize that you might end up a high school teacher thinking about logic in your spare time. Which strikes me as being much better than an adjunct lecturer thinking about logic in your spare time.)
Stories like Yitang Zhang are what keep me awake at night.
On the other hand, though, he seems to have had exceptionally bad circumstances to end up working at Subway. Growing up under Maoist China and all of the personal feuds with his advisors certainly didn't help.
Yitang Zhang did have a job as a university lecturer before his Great Discovery, ne?
But yes. Even having actual brilliance is no guarantee of success.
Try looking at the 80,000 hours site (they're effective altruism folks). Here's a google search I did of their site for the word 'mathematics.'
https://80000hours.org/search/?cx=007036038893981741514%3Ad-war0ad7jy&cof=FORID%3A11&q=mathematics
What kind of math? Are you going for a professor-track or something else?
I'm primarily interested in mathematical logic, especially from a proof theoretical angle, in addition to the rest of the set theory/model theory that people study in logic. Ideally the goal is to be a professor, and that is what I would go for, but I just want to think about what the fallback plans are assuming things work out.
For mathematical logic, you'll want to do some further investigation of placement records of PhD programs. In the early 20th century, mathematical logic was seen as central to the discipline, but in the past 50 years or so, math departments have treated it as a backwater. Most math departments, even at research universities, have 0 specialists in logic on their faculty, and aren't considering hiring one. This is quite different from philosophy and (I believe) computer science departments, where most research universities have at least one specialist in their department, and will try to replace them if they retire.
There are definitely areas of CS where mathematical logic has a lot of overlap. Program correctness, model verification, etc. are niche but growing fields--software is becoming more complex and more relied on, so it's increasingly valuable to formally verify it works to spec. Programming language theory is also related to logic, and I imagine can lead to industry jobs working on compilers and other programming tools.
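To make "formally verify it works to spec" concrete, here's a minimal sketch of the SMT-solver flavor of this work; the z3-solver Python package and the toy abs() example are my own illustrative choices, not anything the comment above specifically had in mind.

```python
# Minimal sketch of spec-vs-program verification with an SMT solver.
# Assumes the z3-solver package (pip install z3-solver); the example is a toy.
from z3 import Int, If, And, Or, Not, Solver, sat

x = Int('x')
program = If(x >= 0, x, -x)                   # the "program": a branch-free abs(x)
spec = And(program >= 0,                      # the spec: result is non-negative...
           Or(program == x, program == -x))   # ...and equals either x or -x

s = Solver()
s.add(Not(spec))                              # ask the solver for an input violating the spec
if s.check() == sat:
    print("counterexample:", s.model())
else:
    print("verified: the program meets its spec for every integer x")
```

Real verification work (compilers, avionics, smart contracts) is this same move scaled up enormously, which is where the logic background pays off.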
People in proof theory can get jobs in crypto. There is a pretty big overlap between some of the stuff in zero-knowledge proofs and regular proof theory. In general, proof theorists can easily find work in computer science, and how far they are willing to go from theory pretty much limits their success. Make sure you get the very best advisor possible. Your advisor will get, or fail to get, your academic position. Everything depends on lineage, as a huge amount of material can only be transmitted through in-person contact.
Computer science seems like a reasonable alternative path, in particular because automatic theorem provers seem very interesting. Good to know about cryptography, as well. How does one make sure that things like a job in crypto are still in the cards after their PhD? Do you need to do side projects, or is it mostly sufficient to just do proof theory during grad school?
The capital:talent ratio in crypto is high right now, so the standards aren't so high at the moment; anyone with skills has job security.
For a logic-adjacent field, look into Formal Verification research, especially wrt bytecode wasm compilers.
Alternatively, the zero knowledge cryptography field is growing rapidly.
Note though, that ZKP employment will be ~entirely in crypto startups, some of which may come across as questionable. The more well established tech firms have virtually no interest in ZKP theory. MSR does fund it but I doubt their research efforts there are growing.
I considered a math PhD and chose not to do one ... Big Tech companies are so much more profitable.
Apart from the NSA, I'm not sure who hires math PhDs other than universities. I'm sure somebody does, but it probably isn't who you want to be hired by.
Big tech hires all the other maths PhDs.
It's not the optimal way into big tech. On the other hand, the average PhD in big tech is probably working on more interesting problems than the average non-PhD so the hit to total lifetime earnings may be worth it.
Maybe it's naive, but I've been pretty surprised by how openly pro-war and pro-occupation much of the American elite class has revealed itself to be in the last week- especially our media elites. While I am normally not particularly enthused about the Greenwald/Taibbi left, I have to say that they do have some excellent points regarding how pro-national security state institutions like the NYT, WaPo, CNN etc. are. They are just openly clamoring for us to stay in Afghanistan, and against withdrawal. I guess I kinda vaguely knew this to be true, but their open pro-war cheerleading in the last week has been a bit astonishing. Makes me wonder how the inevitable war with China over Taiwan will go.
I would like to gently poke at people on the right who are convinced that the media establishment is uniformly leftist. When I was growing up, in a Nation-reading highly left-wing household, a huge part of the Left for us was being anti-war, any war, all wars. (I'm not saying that those are my views now- for example I was taught that the original Gulf War was Bad, whereas I now think it was justified- but I think being anti-war is like a core leftist value). The more complex/nuanced view is that the media is a series of elite institutions with its own worldview, which is mostly quite socially liberal, but not uniformly. I mean, they were in a constant state of hysteria about Trump for 5 years because it was great for ratings, which I know a lot of the Right confused with actually being anti-Trump. They loved the guy! You see all these stats about how much cable news viewership has declined in his absence, etc.
I see this almost uniform media-elite consensus that leaving Afghanistan is Bad as a pretty bad thing for Biden's approval rating/the midterms.
I read the New York Times pretty regularly (usually about five or six articles before coffee, and various others throughout the day) and I don't have the impression that they are "clamoring for us to stay in Afghanistan". I see a lot of big headline articles talking about ongoing problems with the evacuation, but they don't seem like the ones I remember from 18 or 20 years ago that were actually pro-war. They just seem to be following the journalistic convention of complaining about whatever is going on that seems to be bad, without any clear sense of what alternative they would favor.
Yes, there is a lot of that in the media, isn't there?
But would you favor the media recommending fully vetted policies beyond the standard opinion pieces (think "The Economist")? I think this actually might turn people off when the realize the newspaper has a desired policy outcome.
It's an interesting question. Some of the options for journals I see:
• Only publish the facts, don't make any value judgment at all.
• Advocate for a specific desired policy.
• Don't have a single political line, but publish opinion pieces arguing for specific policies from various people with different opinions.
All of these are alternatives to "criticize the current policy but don't advocate for an alternative".
I'm definitely ambivalent about what I would prefer! But my main point was just to contest the claim that the NYTimes was advocating more war here.
In this case, I don't think the media is acting as an elite institution, I think they're just following a very easy incentive curve.
The footage coming out of Afghanistan looks bad. We've got a hurried and seemingly ill-planned evacuation going on and a bevy of "this person/organization will be hurt by the Taliban's rule" stories that are almost trivial to report. These stories are easy to write and will therefore get engagement. I don't think this represents some kind of stalwart leftist consensus that war is good. I think this is just the immediate response to the negative images/stories coming out of Afghanistan, and that it will fade quickly with time.
I predict with high confidence ($100 bet to charity if anyone wants to take me up on it) that by the time the midterms roll around it will be seen as a point of favor on the left that Biden got us out of Afghanistan.
They’re following the curve because they also personally don’t want to pull out. They could instead show the horrible images and say “look how bad it was the entire time! Thank god this is the last of it! We should’ve been out years ago!”
I agree, this is not about anything more substantive than seeing images which evoke an (extremely temporary) visceral reaction. The public did not care and was not interested in what went on in Afghanistan the previous 20 years, and as soon as the pull-out is over, they will revert to not caring. Right now they just feel bad because they're seeing emotional footage.
By the same token, if the media regularly broadcast footage of the real, brutal, grotesque reality of all kinds of common-place, ordinary things that go on every day, people would be up in arms (whether that be graphic footage of what goes on with people dying of Covid in the ICU or what goes on to get your food to your table or any number of other things we really prefer not to see or think about).
Just to pin down terms (to make sure we actually disagree) my prediction would be:
A majority of Americans will be defined as "more than 50%, with the 50% mark not within the error bars, in a poll done by a reputable company"
I think it's plausible (45%) that a majority of Americans will support the decision to leave Afghanistan by the time midterms come.
I am extremely doubtful (80%) that a majority of Americans will feel positively about Biden's execution of our removal from Afghanistan.
Hmm, my prediction was focused specifically on opinion among the American left. I'm not confident making any prediction about how public polling will shake out.
I would be fine with any of the following terms:
A majority of registered Democratic voters (50%+) will support the decision to leave Afghanistan by the time midterms come.
No action to censure Biden over his Afghanistan pullout or reverse said pullout takes place in congress that receives significant support from the Democrat Party. (Significant here described as support from Democrats sufficient to render said action successful.)
No significant portion of the 2022 Democrat candidates for election will support action to either censure Biden for his Afghanistan pullout or reverse said pullout. Significant here described again as constituting a majority when considered together with Republicans who hold similar beliefs.
You can make it coherent by re-casting it like this: the left is anti-war when the wars are against utopian leftist regimes that are attempting to remake society. They are pro-war when they perceive the enemy as a conservative regime that's attempting to hold society back.
OP literally gave the Gulf War as an example of a war that his left-wing family opposed. I don't think Saddam Hussein's invasion of Kuwait could be considered a utopian attempt to remake society.
I would associate nation building with a progressive world view, akin to domestic social engineering, so I don't think there's anything surprising about left-wing media being in favour of it, especially considering the regime that is replacing the occupation government. But there are – of course – many ways to reduce the political landscape to a single dimension, and we should just do PCA instead of being all subjective about it.
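Half-joking or not, the PCA version is easy to sketch. Everything below is illustrative: the survey items and respondent matrix are placeholders I made up, and the point is only to show the mechanics of letting the data pick the axes.

```python
# Illustrative only: let PCA pick the political axes instead of asserting them.
# Requires numpy and scikit-learn; the "survey" here is random placeholder data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical data: 500 respondents x 6 issue scales (redistribution, immigration,
# military spending, etc.), each scored from -2 (strongly oppose) to +2 (strongly favor).
responses = rng.integers(-2, 3, size=(500, 6)).astype(float)

X = StandardScaler().fit_transform(responses)   # put all issues on the same scale
pca = PCA(n_components=2).fit(X)

print("variance explained by the first two axes:", pca.explained_variance_ratio_)
print("issue loadings on axis 1:", pca.components_[0])
# With real survey data, the loadings show what the empirical "first dimension"
# actually is, rather than assuming it lines up with the usual left-right story.
```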
> I would like to gently poke at people on the right who are convinced that the media establishment is uniformly leftist.
A lot of these debates are affected by that there are several different axes of political views, and people variously use the labels 'left' and 'right' to refer to one or another of them. The sides on the various axes form loose coalitions, but these coalitions shift.
IMO much of the time rightists complain about these or those "establishment" institutions being dominated by the left, the more accurate claim is that they are dominated by the left on matters of race, sex/gender, sexual orientation, gender identity and—to a less complete extent—immigration, religion, and party politics. But not necessarily at all on other matters, such as economy or militarism/pacifism.
Mead's four schools of US foreign policy (https://ricochet.com/311633/archives/the-four-themes-of-us-foreign-policy/) tend to place US foreign policy on what could be a two-axis diagram, with one axis being more-or-less left-right and the other more-or-less globalist-localist.
I think the US political establishment is globalist more than localist on both sides, and so it makes sense that the establishment media would lean towards the left-globalist corner (Wilsonian, in Mead's characterization). The anti-war left is solidly in the left-localist corner (Jeffersonian, in Mead's characterization). You can look at any war we've been involved in and see how people from different points on the diagram influence the resulting policy.
The first Gulf War primarily was a right-globalist effort (Hamiltonian), but there were things to support for the left-globalists (Wilsonian) in the legitimacy of the UN and to the right-localists (Jacksonian) in demonstrating US power to deter other wars; it's the left-localists (Jeffersonian) that most opposed the war. While there was agreement from three of the sides regarding the war, that agreement didn't stretch to what to do when the war ended, which is why the response afterwards sowed the seeds of future conflict.
But I'm more anti-war cosmopolitan, which is globalist I guess. (I kinda wanted them to leave some forces in Kabul, but focused on defense, not offense, to maintain a pro-Western zone where women can be educated and stuff).
Two axes isn't enough. The other popular "second axis" is statist-libertarian...which again poorly captures my way of thinking.
The axes are limited by being defined by historical people that have either been President or have been close enough to the presidency to define US foreign policy. They are useful primarily in their predictive capacity, which mostly lies in defining a second axis able to describe US foreign policy beyond right-left (for example, while Ron Paul and John McCain were both on the right, they both had very different foreign policy preferences).
That being said, it's a spectrum of general policies, not a straight-jacket. There's plenty of space between 'the US should always stay out of other countries business' and 'the US should always act as policeman at the behest of the global community'. People also holding both of those positions near the extremes (and anywhere else on the spectrum) can rightly view themselves as 'anti-war'; at the same time, neither extreme is necessarily strictly pacifistic.
I agree - there are at least three axes of politics in the Anglosphere (socialism/laissez-faire, authoritarian/libertarian, "woke"/traditional) and that doesn't even cover international relations very well.
technological progressive/regressive, pro/anti militarism/violence are two others. There’s a lot of them! There are many relations between them (and often not just tendencies towards one direction but more complex ones)
100% this. I do wonder in this specific case if the warhawk stuff became left-wing because Trump pushed the Republican position to isolationism, and hell will freeze over before the 'Left' and 'Right' are allowed to agree on anything, never mind if this means one side has to take a position completely at odds with everything they've been saying up till yesterday
Could you link one of the NYT articles (ideally not from their Op-Ed section)? I specify not Op-Ed, because my impression (from the WSJ op-eds) that those people are much less so part of the "media elite."
Also, outside of the media, which "elite class" are you referring to? My impression (once again from WSJ's reporting, so could be biased) is that most senators/representatives range from condemning to dubious about the execution of the extraction, but are largely not too harsh on the decision to pull out.
To a certain extent, it is Yarvin's "cathedral" that is advocating for a permanent War in Afghanistan. The deep state and the true believers in American power want to continue to project that power; if the US had abandoned Ashraf Ghani but held onto Bagram AFB the Taliban would have been unable to move against it.
This is somewhat ironic, since Yarvin wants to replace the cathedral with a new entity which could do a permanent War in Afghanistan, only effectively (with Battle Royale-style explosive collars or something).
Do you think that the US withdrawal from Afghanistan is likely to increase or decrease the total amount of war in the world?
Well the $50,000 question is whether the Afghanistan charley-foxtrot makes the Chicoms finally decide to settle things with Taiwan. Perhaps not right away, as even Joe Biden couldn't sit still for that, but after a Decent Interval. That would be an exceedingly dangerous event.
Increase. In the short term, obviously the US involvement in Afghanistan is over, so it's a temporary decrease. In the long term, Taliban is likely going to start new wars. At the same time, the utterly botched US withdrawal from Afghanistan will turn the public more hawkish; I doubt we'll withdraw from anywhere else anytime soon.
Whom do you believe the Taliban will invade? Turkmenistan seems the only vaguely-plausible candidate - the PRC, Iran and Pakistan are way, way out of their weight class, and Uzbekistan/Tajikistan are in a mutual defence treaty with Russia.
I agree with Melvin, below: it's not so much that Taliban will officially invade anyone, but rather that they'll conduct and/or aid and abet a continuous stream of terrorism on multiple targets.
Just some back-of-the-napkin numbers here, but the Taliban would have to do something like one and a half 9/11-size attacks a year to match the average yearly death toll of the war in Afghanistan.
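Spelling out that back-of-the-napkin arithmetic (both numbers below are placeholders I plugged in to reproduce the stated ratio, not sourced figures):

```python
# Illustrative arithmetic only; both inputs are placeholder assumptions.
DEATHS_PER_911_SCALE_ATTACK = 3_000   # rough order of magnitude for a 9/11-size attack
ASSUMED_WAR_DEATHS_PER_YEAR = 4_500   # placeholder average yearly toll of the war

attacks_per_year = ASSUMED_WAR_DEATHS_PER_YEAR / DEATHS_PER_911_SCALE_ATTACK
print(f"{attacks_per_year:.1f} such attacks per year would match that yearly toll")
# With these placeholders: 1.5 attacks/year, i.e. the "one and a half" above.
```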
OTOH, empirically it takes about one 9/11-size attack in 20 years to create the average yearly death toll of the war in Afghanistan. I don't see why we should expect future attacks to be far less deadly.
Is Pakistan way out of their weight class? A good deal of the population and government are sympathetic to radical Islam. Also, Pakistan has its own Taliban, which may join forces with the Afghan Taliban to try to capture the Pakistani government.
These don't seem like war scenarios so much as Pakistan's own democratic process (or lack thereof, in the case of either a pro-Taliban or anti-Taliban coup by the Pakistani military).
"Invasion", to me, means "one state militarily attacking another with the objective of territorial control". Pakistan's military is of the same scale as Iran's (slightly ahead in some ways, like nuclear weapons); its monopoly on force is not seriously threatened by any Afghani action, even taking into account possible domestic collaborators.
I don't believe the Taliban can or will actually invade anywhere.
But I do believe that a Taliban-run Afghanistan is likely to become a safe haven slash training ground for violent Islamist movements that will start carrying out violent acts in... well, just about every other country on Earth with any sort of nonzero Muslim population. That's what happened last time the Taliban controlled Afghanistan (remember why we invaded in the first place?) and also what happened in the few years that ISIS managed to control a chunk of territory.
Has there been any serious pushback on SlimeMoldTimeMold's _A Chemical Hunger_ series? https://slimemoldtimemold.com/2021/07/07/a-chemical-hunger-part-i-mysteries/
To me the bones (not the proposed culprits) seem obviously right - it's got that nice feature of correct theories that it explains a whole bunch of niggling weirdnesses in a satisfying way. That said, I've been burned in this idea-space before, so if there's good pushback somewhere I'd like to read it.
It seems like if this theory were true you should be able to find some really sharp discontinuities between adjacent areas that source their water very differently. In particular, well water (especially from deep aquifer wells) vs. local rainfall vs. distant surface-sourced water, etc.
For example, in the SF bay area there are several different water districts. There are areas where just going across the street gets you from water drawn basically directly via pipes from catchment basins in the Sierra Nevada, to areas served by the state water project where the water went through hundreds of miles of rivers and canals and reservoirs, to areas served by local wells, all with very little mixing or pooling of the water. We should in theory see pretty dramatic differences across water district boundaries in that sort of situation, and I don't know that we do (if we do, that would be great evidence). Southern California has some similar comparisons.
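To make the proposed test concrete, here's a hypothetical sketch of the boundary comparison; the file name, column names, and water-source labels are all invented for illustration.

```python
# Hypothetical sketch of the proposed natural experiment: compare obesity rates
# just either side of a water-district boundary. Requires pandas and scipy;
# the dataset and its columns are made up for illustration.
import pandas as pd
from scipy import stats

df = pd.read_csv("bay_area_obesity_by_tract.csv")   # hypothetical dataset
# Assumed columns: 'water_source' in {'sierra_pipe', 'state_project', 'local_well'},
# 'obesity_rate' (fraction of adults), 'dist_to_boundary_km'.

near = df[df["dist_to_boundary_km"] < 2]             # tracts hugging a district boundary
a = near.loc[near["water_source"] == "sierra_pipe", "obesity_rate"]
b = near.loc[near["water_source"] == "state_project", "obesity_rate"]

t, p = stats.ttest_ind(a, b, equal_var=False)        # Welch's t-test across the boundary
print(f"mean difference = {a.mean() - b.mean():.3f}, p = {p:.3f}")
# A sharp, replicated difference right at the boundaries would be strong evidence for
# a water-borne contaminant; no difference would count against it.
```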
Excellent argument.
I thought it was fascinating, but he seemed to dismiss most of the diet and exercise aspects with arguments like "subjects only lost 5 pounds in 6 months." It seems like if you make a couple of those changes across a fraction of the population for the long term, that adds up to a meaningful share of the change.
I think it's actually fairly well known that diets don't work the vast majority of the time (90+% of people gaining back the weight within 5 years).
Sure, but is that because they actually aren't effective or because people stop following them? I've only done some casual reading on the topic, so I really don't know.
This question is difficult because it feels like those should be different things, but in practice the distinction isn't so clear.
Imagine a drug came out that would reliably cure/prevent cancer if taken regularly, but studies show that 90% of people - even terminal patients - stop taking it in the short to medium term due to intolerable side effects, which get progressively worse over time. Is it accurate to say this drug cures cancer? In a technical sense, sure, but in practice that phrasing seems misleading - certainly we wouldn't be confused about why people keep dying of cancer when all they have to do is take a pill to prevent it.
The case with dieting is actually worse than that, because some appreciable percentage of people will gain back the weight they lose even if they do stick to the diet. If you're trying to maintain a weight well below your set point you'll have to get stricter over time as your body gets more aggressive about compensating for the perceived famine. And your "eat, fool" circuits are waaaaay down deep in your brain - they can smear your prefrontal cortex against the wall if they have to.
Put differently, a diet that 90+% of people don't stick to has demonstrated itself to be incompatible with human physiology in a way no less serious than if it were ineffective due to not successfully interacting with the intended receptor. It looks different because will-power can temporarily override the physiology in the case of diets, but ultimately will-power will bow to its masters in the brain stem - you can't hold your breath until you die of hypoxia, and you can't diet yourself to well below your set point long-term.
There was a lot of justified pushback on r/SSC and DSL. Lots of correlations and claimed associations and little real evidence: maps that look similar, bunches of n=4 studies smashed together and taken at face value, and above all lots of correlations in a domain where everything is correlated already, causal or not, etc. That said, it's still a lot of effort and interesting stuff put together.
This article argues that the part about watersheds is wrong: https://nothinginthewater.substack.com/p/contra-smtm-on-obesity-part-2-watersheds. The argument: In the US, the watershed map is the same as the map of African Americans (mouth of river basin -> larger plantations -> lots of slaves), and race could correlate with obesity for any other reason. Similar argument that it's confounded by socioeconomic status. They further argue that the map of river basins in China does not support SMTM's conclusion at all (Xinxiang is very obese for racial/social/economic reasons, and the rest of the obesity map doesn't actually line up with river basins well). The author promised to write a followup post against SMTM's altitude argument.
(Personally, as usual with this subject, I can't tell which way is up.)
I definitely found the China map unconvincing, but the US map had my jaw on the floor and is a major point in SMTM's favor, so this pushback is excellent!
That said, I think CT is overselling the degree to which race can explain that obesity map. The three states at the mouth of the Mississippi, sure, but YOU CAN TRACE THE F***** MISSOURI, MISSISSIPPI, OHIO, AND TENNESSEE RIVERS ON THE F***** COUNTY LEVEL MAP (I actually deduced the existence of the Tennessee river from that map without knowing it was there). It's one of the most stunning pieces of evidence for anything I've ever seen, and it doesn't match the race data nearly as well as it matches the river basins.
I noticed that their state-level plot is from 2018 and the CDC now has the 2019 up. They look somewhat different and frankly they don't match the watersheds all that well. WV, Alabama, SC, Indiana, Michigan - are these large watershed states? Some are part of a large watershed, but at the top not the bottom so inconsistent with their theory.
https://www.cdc.gov/obesity/data/prevalence-maps.html
How annoying.
They said West Virginia is a distinct case, and it's quite reasonable to have a couple cases that are separate from their main thesis. I haven't looked at the others to see whether it lines up (northern Alabama would be in the Tennessee River watershed for instance, while southern Alabama would only have shorter and smaller rivers).
You can also trace race on the county level map - the blacks settled where the farms were, as slaves, on the ... rivers.
Meh - you can watch the Missouri cross country Dakotas. I don't think that's attributable to enslaved blacks.
Wow. "Cross *the* Dakotas".
I only know this from the meme image about how nutrient deposits caused the differential racial makeup of the settlements where those deposits were!
Population density would be a confounding issue there, wouldn't it? Those rivers have more towns along them than the surrounding countryside.
On the other hand, that county-level map (which I missed on my first reading) really is quite striking. I would like to see more than two other countries, though. Thinking about countries with significant population at both high and low altitude, what about France, Germany, Russia, Canada, Chile, Peru, Mexico?
Actually I just looked up obesity maps of Mexico and Peru and they do seem to support the trend too, with low-lying areas consistently fatter than mountainous areas. I do wonder whether it's as simple as the fact that walking up and down hills all day burns a lot of calories, though.
If I understand the argument correctly, it's well known that lower elevations are fatter than higher, but it's generally attributed to hypoxia, while SMTM is arguing that it's a side effect of lower elevations being at the foot of watersheds.
The Rio Grande and the Colorado river of the west are both heavily evaporating rivers and get lower flow as you go downstream, and thus would be expected to behave differently from rivers like the Brazos and the Colorado river of Texas, which are heavily agricultural and get more flow as you go downstream.
I've called it the "phlogiston" theory; a decent amount of circumstantial evidence but overall relying on a theory of a particle that may not exist (and none of the specific options suggested are particularly compelling).
At a high level, it seems to be understating how modern hyper-processed foods (Oreos, Diet Coke, etc.) contribute to obesity. Of course, any explanation for animal obesity wouldn't be based on that, but ... there are lots of possible explanations regarding domesticated animals, and I am not so sure that the "trend" holds for wild animals.
It understates how hyper-processed foods contribute to obesity? On the contrary, it may explain *why* hyper-processed foods contribute to obesity - because processing contaminates the food.
Somewhat related, the Peter Attia podcast had a recent episode that I found somewhat fascinating for the discussion on the differences between lab mice and wild mice: https://peterattiamd.com/steveaustad/. Basically, typical lab mice are so wimpy and fat they can barely be considered mice at all, and I imagine this is somewhat true for domesticated animals in general.
I mean, a bunch of theoretical particle theories have panned out...
The hyperpalatable foods theory has a couple sticking points for me: for one thing I'm fat and it's definitely not because of processed food. I eat too much in general, but I've cut out the gluttony in the past, and while I can straightforwardly lose 20-30 pounds I'm still substantially overweight - for some reason my set point is 245 lbs. And testimony from the anti-diet space gives examples of people winding up in the ER with organ failure despite still being kinda chunky (that particular claim was from Reagan Chastain, who should be trusted as far as you can throw her, but still). This calls to mind Guyenet's anecdote of the obese mice starving to death while defending body fat stores higher than the average mouse has during normal life - in both cases something has gone badly wrong with the set points, and I really don't see how hyperpalatable food explains this. Throw in the lab animals getting fatter on the same food as 50 years ago and the theory is on life support.
> And testimony from the anti-diet space gives examples of people winding up in the ER with organ failure
I mean if you ate 1,500 kcal of HFCS or chocolate fat-free smoothie per day, or something "healthy" like an olive oil / kale diet cleanse, I could totally see that causing organ failure. I can't see organ failure being that common if you eat 2,000 kcal/day of a healthy balanced natural diet, or even burgers, fries, and some lettuce, or soylent or something.
(Eating those to lose weight)
Also, if you watched the Rogan debate where Guyenet crushed Taubes, the only unanswered point I remember Taubes scoring was when he hammered Guyenet on a population that had obese mothers and starving children. How a population like that comes to be has haunted me ever since - SMTM's theory explains it.
Humans are omnivores; that is partly why we occupy so many ecological niches.
May I offer the words of the poet Leonard Cohen
https://genius.com/Leonard-cohen-they-locked-up-a-man-poem-a-person-who-eats-meat-intro-live-at-the-isle-of-wight-lyrics
I'm looking for recommendations of little-known but truly excellent songs from the classic rock era. These would be songs that – in some alternative universe – would have been smash hits, but for the flapping of a random butterfly's wings in some remote corner of the world. Thanks!
Blackburn & Snow were a popular boy/girl early 60s folk-rock duo whose only album was not released until decades after they broke up. “Yes, Today” is my favorite of their songs - might have been a hit if it had come out when it was supposed to.
Scaruffi’s "Best Rock Albums of all Times" might be helpful:
https://www.scaruffi.com/music/best100.html
Which I originally found via https://lukemuehlhauser.com/scaruffis-rock-criticism/:
> We can start with his writings on music, since that seems to be what he is known for. He has helpfully ranked the best 100 rock albums of all time in order…
> If that’s too broad for you, he also provides his top albums year by year … every single year from 1967 to 2012. He also gives genre-specific rankings for psychedelic music, Canterbury, glam-rock, punk-rock, dream-pop, triphop, jungle … 32 genres in all. Try punching “scaruffi [band]” into Google; I defy you to find a major musician he hasn’t written a review of. These are all just part of the massive online appendix to his self-published two-volume history of rock music.
I just looked into that page... it may be a good pointer to semi-obscure albums, but he seems to belong to a particular kind of avantgarde underground contrarians. When someone suggests "Trout Mask Replica" as the greatest rock album ever, there's something wrong with them. Also, I don't have the knowledge to judge all of his best-of lists, but on the ones where I do, I can say he makes some bizarre choices. Listing Cream and the Animals as milestones of progressive rock, while skipping the Beatles and Moody Blues - no. Just, no. Charlie Watts (RIP) at number 3 of the best drummers? Pleeeeease. Klaus Schulze with "Irrlicht" for the best rock album of 1972? That was not even a rock album at all, by any stretch of the imagination. So, while checking out his recommendations doesn't hurt... - no, scratch that. Trout Mask Replica does cause pain and discomfort. Check out his recommendations, but at your own peril, and don't be frustrated if you don't like any of them.
Heh, thanks for looking into it. I'm not really familiar with rock, so I can't really judge. But the ones I did check out I didn't like much either.
Luke writes that "In general, the greatest albums are not ones that should be listened to near the beginning of one’s explorations of music." I certainly haven't reached that stage for rock then it seems
Or some artists are just not your cup of tea. I have been listening to various flavors of rock and metal for thirty years now, and some of the supposed cult albums still don't do anything for me. Also, rock music should not require a doctorate in music theory to appreciate, at least on some level (and I'm saying that as a fan of progressive rock). Rock should have melodies that touch you emotionally and haunt you, and rhythms that make you want to rock out. Some melodies and grooves work on some people and not on others... but if you can't find anything where you can at least say, "I understand why some find that enjoyable", it may be that it's just not a great rock album. (It might still be a great something-else, though.)
It eventually became popular 30 years later but Pink Moon by Nick Drake was released in 1972.
Rocky Erickson's Gremlins Have Pictures has a 70s rock vibe though it was released in 1986. I think it's a masterpiece. His live version of Heroin really captures that era.
"The classic rock era" is a bit vague, but here are bands and songs that I've come across that may fit your criteria:
- the band Love put out the excellent album "Forever Changes" in 1967 - psychedelic rock based on acoustic guitars and the occasional trumpets and strings. Standout songs IMO are "A House is not a Motel" and "Maybe the People would be the Times".
- Beggar's Opera had a minor hit with "Time Machine" - early prog, lots of mellotron, dramatic vocals... I love it.
- Starcastle sounded a lot like Yes, but maybe a bit more organized. At more than ten minutes, "Lady of the Lake" is an unlikely candidate for a smash hit, but still a great song, and quite catchy for progressive rock.
- the prog version of "Eleanor Rigby" by Esperanto may be the best Beatles cover ever.
- in the blues-based corner of classic rock, Ashbury had an excellent album in "Endless Skies", but never achieved breakthrough success. "Warning!" is a good place to start.
- Bob Seger recorded an awesome cover of "Bo Diddley/ Who Do You Love", which AFAIK never got anywhere... but it rocks hard.
Let me know if any of these resonate with you.
Eleanor Rigby is one of my favorite songs. That version by Esperanto is at https://www.youtube.com/watch?v=T2V1MTGWJgA&ab_channel=sada0210
It's very creative musically, but they lost the story by stretching it out over 8 minutes.
The singer even tacked the end of verse 3 onto the beginning of verse 2, cutting out 1/3 of the lyrics. It's become an art object instead of a human story.
Kansas also did a prog cover of it (less "prog" and more classical) with the London Symphony Orchestra, on "Always Never the Same", but I would never call it the best Beatles cover ever.
Ray Charles did a good Rigby cover. So did Joan Baez. Paul McCartney also did a cover of Eleanor Rigby, on his album /Give My Regards To Broad Street/ (1984). Not my favorite, but it's followed by a new composition, "Eleanor's Dream", which is worth listening to for the Eleanor Rigby connoisseur. If I had to pick a favorite, it might be Rare Earth's R&B cover of it.
Again, these wouldn't have been hits, and are only marginally rock, but I think they're great:
- Bruce Cockburn, generally
- T-Bone Burnett's album /Trap Door/ (1982)
- Robyn Hitchcock's "My wife and my dead wife" (1980)--the lyrics fascinate me, because the story it tells is IMHO unlike anything else in literature in its intent / function / purpose
Robyn Hitchcock has quite a few great tunes - “I Watch the Cars”, “The Lizard” and “I Often Dream of Trains” are three of my favorites. “My Wife & My Dead Wife” is also excellent, and is somewhat thematically similar to Noel Coward’s play/movie “Blithe Spirit”
It's easier for me to think of songs that I think were "truly excellent" but could never be "smash hits". Most of the repertoire of the midwestern art-rock band Kansas falls into that category. They did later become highly influential on Scandinavian metal, but nobody else seems to have noticed this. Some of my favorite songs by them are "Lamplight Symphony", "Cheyenne Anthem", "Ghosts", "T.O. Witcher", "Peaceful and Warm", "Rainmaker", and the entire album "Point of Know Return". One of their songs, "Miracles out of nowhere", has a 4-part fugue starting at 2:25. It ain't Bach, but it was a sincere attempt to cross rock with Bach.
Similar story for Jethro Tull's album /Songs From the Wood/, which is super-famous with critics, but I don't recall hearing any of it on the radio.
There's a very obscure song by Jonathan & Charles, "Mrs. Chisholm's Weekend" (1968), which is in the same vein as "Eleanor Rigby" (1966), but had the misfortune of being released on a Christian album. There are also a few songs from that period by the Christian rock musician Larry Norman which could have done better in the mainstream, like "I'm the Six O'Clock News", and maybe "So Long Ago the Garden" (1973), which sounds like a missing link between beat poetry and rap music, though I doubt any future rappers ever heard it.
(Take that "highly influential on Scandinavian metal" with many grains of salt. That's just my guess, based on some guitar-solo similarities with melodic death metal. They obviously influenced Dethklok, musically and visually, which is however a fake Swedish melodic death-metal band, which however probably sold more albums than most actual Swedish melodic death-metal bands.)
Frank Zappa is generally more vulgar than I care for, but "Muffin Man" was the song that made me want to get better acquainted. Give it a couple of minutes to actually get going... :)
Frank Zappa and Tom Waits both did interesting weird stuff back then. But, again, not top-40 hit material.
I'll throw in Blue Oyster Cult's The Red and The Black. It's a banger. The lyrics are about a man fleeing the Royal Canadian Mounted Police, but really it's the music that gets you going.
In an alternate universe where a 10-minute rock song could become a smash hit, I think Green Grass and High Tides by The Outlaws would have.
I also dig Bad Time by Grand Funk Railroad.
Thanks for recommending that Outlaws song! Nice!
I discovered Green Grass and High Tides through one of those music games, I think GH3. Amazing song.
Re. the recent announcement that NYC will provide free education for 3-year-olds in 2023, and the $3.5 trillion "stimulus" bill that passed in the US Senate on Aug. 12, including some large amount for free preschool for 3- and 4-year-olds in America. (I can't seem to find out how much, other than that it's part of a $726 billion allocation, and that an "expert" predicted preschool for 3+4 year olds would cost $60 billion (yearly?)).
Scott, or maybe someone else, has written posts showing that the consensus now is that pre-school for 3-year-olds has no lasting educational benefits.
Now Vox has an article (https://www.vox.com/future-perfect/2018/10/16/17928164/early-childhood-education-doesnt-teach-kids-fund-it) saying that it has no lasting educational benefits, but has lasting health benefits. It says that the studies showing no lasting educational benefits used control groups, while the studies showing lasting health benefits were longitudinal studies mostly without control groups. It suggests that the health benefits were due to the actual health-benefit portion of the pre-K education, and that maybe the educational part, which is much more expensive, isn't useful.
So it sounds to me like scientific consensus at the moment would say that Congress is throwing a lot of money away on preschool for 3- and 4-year-olds, or at best betting a lot of money on a hypothesis, blowing right past the US budget cap before we've even started on the $10-to-$100 trillion Green New Deal.
Would anyone care to elaborate on this?
I imagine there would be some benefit to parents too.
Congress is bribing the teacher vote.
X to Doubt, dude.
They'd just fund their pensions if they wanted to do that.
I assume that preschool is to let parents work more than for the benefit of the children?
The US has thrown away a shit-ton of money on useless pre-school education for decades; it's not going to stop now. It's a lot like "infrastructure" bills to address "our crumbling roads and bridges," which pop up pretty reliably about once a decade or so. (You'd think at some point the voters might say: WTF? we voted for $x billion less than 10 years ago for those God-damned crumbling bridges, where did it go? Either the bridges should be brand-spanking-new now, and we don't need more $billions, or else whoever disbursed the money 10 years ago should be dragged into the public square and hanged. But they never do.)
1) The set of repaired things may be different; no one expects that it funded the repair of every single bridge.
2) Some maintenance/repair every 10 years is not unusual (sometimes it is required more often, due to floods etc.).
Interesting hypotheses. Doesn't match my experience in the slightest, but to each his own.
Can you link some evidence that they repeatedly claim to repair the same bridges again and again, more often than in other countries? And that this repeats on a larger scale - not just with some specific bridge?
Nope. The only evidence is my memory. I've heard this spiel for 40 years, and it's always the same. I've never heard any politician tell me "welp, we fixed all the Interstate bridges in the Northeast last time, it's the Midwest's turn now" or "OK, we fixed up Amtrak last time, they're good to go, so no money for that in this bill."
It is 100% normal that the Northeast has bridges that require maintenance/rebuilding at various times.
Bridges are built at different times, requiring differing service intervals.
My city in Poland right now has two bridges under maintenance, last year another serious rebuilding was completed, and a new railway bridge is under construction, while many other bridges are slowly aging and will require costly repairs in coming decades.
We are definitely not rebuilding all the bridges in one year each century. That would be a bad idea for many reasons.
AFAIK this is 100% normal, though some grift/stealing/misspending hidden in this is possible.
There was an article recently (can't remember where it was) suggesting that big federal infrastructure transfers to states just make states spend less of their own money on infrastructure. So the infrastructure spending stays the same but with different state-to-federal funding ratios.
Even more than most "education," preschool is pretty transparently just childcare with some baseline attempt at quality control (background checks, actually interacting with children rather than setting them in front of a device, etc).
The nation probably has some interest in more parents being able to go back to work after having children within 3 - 5 years, as opposed to in 5 - 8 years, and also to have more than one child. Grandparents are increasingly continuing to work until they're too old to do a decent job raising grandkids either, grandkids are coming later, and they often live too far away anyway. Is it a strong enough interest to justify the cost? I'm not sure. Our district is already short elementary educational assistants, and it's not clear we could find the "teachers," even at a decent wage. I'm not sure what the situation in New York is.
I'm somewhat doubtful that government sponsored, standardized, acceptable childcare at that age would be significantly less expensive if they didn't bother with the educational component. The workers would still have to do something to prove conscientiousness, safety, and generally that they were responsible adults strangers should trust their children with, which isn't cheap.
I would theoretically prefer an extended family culture where there's always someone in the house minding the young children, but my revealed preferences contradict that.
Chiming in on the point about how hard it is to find qualified workers for these positions: currently, to do pre-K childcare you need to be happy to work with kids, be willing to do so for less money than you could make as a teacher/teacher's aide/someone inside the official education system, AND have a clean enough record that you can pass a background check. This trifecta just doesn't exist in most cases.
The expensive component is the number of staff required, though inner city rents can also contribute. Reducing the credentialing required for the staff can reduce costs, though, and I'm doubtful the credentialing has much if any correlation with the ability to teach pre-schoolers (though this part obviously depends substantially on which jurisdiction we're discussing)
Re: credentials for childcare staff, I don't know what the situation is like in the USA, but over here there are more and more standards being set for what children should be achieving.
Here's a link to the standards manual for Early Childhood Education, which covers from birth to six years of age, for the following settings:
- Full and part-time daycare
- Childminding
- Sessional services
- Infant classes in primary schools
https://siolta.ie/media/pdfs/siolta-manual-2017.pdf
This is the kind of stuff that is more and more being considered a necessary part of children's education. It's no longer a case of "the parents are working, the kids need to be taken care of, for a group of three year olds just let them play with toys, learn how to get on with other kids, and since this is the age Biting happens train them out of doing that"; the kids and the childcare provider/preschool have to be hitting all these sorts of goals (e.g. as below):
5.2.8 How do you enable the child who consistently plays alone to interact with other children?
5.2.9 In what ways are children facilitated to work together in small groups?
If I remember what I was like at the ages of 4-6, I would have *hated* being chivvied into "interacting with other children" when I was quietly and happily playing alone with blocks or whatever, but we can't let kids be solitary! we must make sure they are sociable! and learn to interact! so they can be productive and efficient workers!
On the other hand, credentialling is here being used as a proxy for social class and overall trustworthiness. Call me a snob, but I don't want my kids taken care of by the kind of bottom-of-the-barrel employees you'd find at McDonald's or the TSA.
It's not "education", it's babysitting so parents can work. See e.g. Bernie Sanders' pitch for universal pre-k - it's like a thousand words about affordable child care, with one throwaway line about unspecified "well established" benefits. Look at it as a socialist program to provide a service to working-class families, paid for by taxing millionaires and billionaires, and it makes sense (whether or not it gels with your values is another question entirely).
In addition to helping out working class parents, I also think there are benefits to the children, not in educational terms but in terms of socialization and normal development. It is really not natural for a kid to be cloistered 24-7 with a hovering adult who puts that kid's interests above all others. Kids are supposed to be running in a pack with other kids, learning to socialize and bump elbows and get in conflicts and work them out. Back in the olden days, that could happen just within families or extended families because people had so many more kids -- you couldn't give them all attention. Now it happens with daycare or preschool.
I well remember the first day of kindergarten, how the kids who hadn't been in daycare or pre-school previously were the ones crying in the corner, needing to be comforted by the teacher because they were terrified. The daycare kids were all having a great time. A parent looks out for their own kid's interests above other kids', and it's not great to be marinated in that situation for 5 years straight when that isn't how society works. Kids that aren't socialized young are much like all the COVID puppies that didn't get socialized -- anxious and neurotic and not as resilient, when they aren't in perfectly controlled and safe contexts.
I am someone who is prone to rather extreme introversion and wanting to be by myself indulging my cerebral interests. If I had been alone in my mother's care my first few years, I would've spent all my time reading and playing alone (rather than just doing that on weekends and evenings) and I'm sure I would have become a serious weirdo with social problems and anxiety. I completely credit being only a marginal weirdo and being able to get along in society with being in daycare in my earliest years, and getting used to having to socialize, and having a minder who did not prioritize my interests either above or below those of the other children.
I broadly agree although I would point out that kids "running in packs" in extended families or communities usually happened within mixed age groups, so that the older/more experienced kids would lead, establish and maintain norms, point out dangers, and guide the younger kids through their exploration of the world. This probably improved outcomes for the younger kids, while giving the older ones experience in leadership and parenting. This is how it was for me growing up around my sisters and cousins.
Also, there are differences in impact between supervised and unsupervised time, and between structured and unstructured time. Daycare/pre-K leans more towards supervised and structured, which is fine to an extent, but I get the sense that kids in general - at home and at school - are getting less and less unsupervised, unstructured time. This is bad because they need this time to develop executive function (by deciding what to do with unstructured time, individually or collectively). Of course daycare/pre-K "curriculums" can build some unstructured time into the day, but now we're getting to the details where the quality of the program matters a great deal.
Yes, I fully agree with this. And even in schools, it didn't used to always be so age-segregated...in one-room schoolhouses, etc., you had kids of all ages learning together and helping with the younger kids. I don't know why we have such strict age-restriction nowadays, with large groups of kids all exactly the same age, which just breeds a lot of competition and status games rather than more natural age-based leadership and social hierarchies that better reflect the adult world.
I had a pretty ideal daycare situation as a kid, where a home-based daycare provider was watching a group of about a dozen kids ranging in age from babies to about 10 or 11, and we all played together and had a lot of unstructured time just running around outside or whatever. It was wonderful and I completely credit it with my social development, which I would not have received at home. I think some years later the state came down on her and required that she implement strict minder ratios and more age segregation, which is too bad.
This is pretty much my take on it too. Those pushing for universal pre-K don't seem to care if the educational benefits are real or not, but they do care a lot about freeing up individuals (mostly women) to get back into the workforce. They seem to understand that offering babysitting services for all is not as popular, or as easy to justify, as better education for small children. That they can help save some kids from 24 hours a day in really crummy homes may also be a factor.
You might see this as a socialist program; I see it as evidence that the values of capitalism, consumerism and materialism are fully baked in to progressivism. This is Peak Capitalism. The state will now take care of your children even earlier so you can get back to the important part of life: being a dedicated worker bee.
As a frequent critic of capitalism, I appreciate the cynicism, but I wouldn't take it for granted that parents would freely choose to raise their own young children full-time even if we lived in some kind of post-capitalist utopia where machines made all our stuff for us. Some people really love parenting (I'm one of them - I'd be a full-time stay at home dad if I had the means, and working from home during the pandemic really cemented that impression, which I gather is not a common reaction), but most people seem to do it because they have to, or because it's what's expected of them, or because they sort of fell into it by default. Many parents need time away from their children, and just because it's often the men who actually get the time away, that doesn't mean that there aren't also women who would rather go out into the world and do something - get an education, work a fulfilling job, or even just enjoy recreation - than stay home with kids for 5-6 years until they're ready for kindergarten.
This is to say, I think even a well-organized socialist society freed from the constraints of a materialist, consumer-driven economy, would provide for some kind of collective childcare in order to give parents some personal time. This would just be a recognition of the fact that raising children is a difficult but necessary job and so society should take on some of the burden from the individual.
The classic statement is that it takes a village to raise a child - but modern people mostly just don't live in villages any more (or in the kind of extended-family communal households that many Bay Area rationalists do have) and so raising a child these days either requires expensive childcare, or the parents being with the child full time, as opposed to the more traditional options.
The question is whether the childcare support is provided by family and friends (the traditional way), by the market (the capitalist way), or by the government (the socialist way). It would be good to enable more people to live in such a way that the traditional means are readily available, but for better or worse, our society has developed in a way that puts that out of reach for most people, even more than the capitalist cost of childcare.
Sure, but I think what LesHapablap and myself both lament is that there isn't the choice; some people would love to stay home and take care of their kid(s) at least until they were four years old and ready for primary school, but there's not the opportunity - both parents need to be working nowadays in order to pay mortgages etc.
Which is how you end up with the absurdity of the mother (usually) working a job where the majority of her pay goes on childcare so she can work a job.
This. Also, another cynical angle - a world where a mother works to pay for the costs of childcare is one with a higher GDP than one where the mother stays at home to look after her own children, since the work of looking after children is now visible to the state.
I see this a lot, but this is just gains from trade. You can make the same argument that getting a job, vs. growing your own food or building your own home, just pushes money around and makes GDP look higher.
But if I'm a better programmer than I am a farmer/butcher or construction worker, then I'm actually materially better off making the trade and society is better off if people do this in general, and I've never seen the argument why this doesn't apply to child care.
It isn't the same argument unless professional child-rearing, e.g. in the preschool setting, is superior to what the child can get at home -- which nobody at all believes. So the situation is more similar to a peasant who has to plow a field with a stick, inefficiently, because he can't save up enough money to buy a good plow (and horse). In this case, the family can't supply enough saved money (e.g. through one parent earning enough for all) to buy the most efficient and effective possible child-rearing -- which is done by one parent.
You are "actually materially better off" if you drink a government-provided slurry with all your necessary nutrients instead of cooking yourself or buying food from restaurants. And society would be better off too. And you can use your extra pocket money to bid on a house someday. Actually you may have to take the government slurry in order to afford the house, since everyone else bidding on the house has already done so.
Nutrient slurries are not actually that healthy, in practice
So the metaphor holds then!
Whether it's a real benefit versus a shadow benefit will depend on several factors. One is how many children can be "properly" cared for by an individual. If a daycare worker cares for approximately the same number of children as a parent, then that's not adding much on the daycare side (the parent may be doing more economically valuable work than the daycare worker, so that's potentially still important). I am scare-quoting "properly" because I have significant doubts that the kind of upbringing a child can get in daycare is equivalent to what they can get from a parent who is regularly with them. Obviously huge caveats here, and some children are better off just being taken away from their parents, so there's certainly some room to justify childcare even when the daycare is fairly poor.
I'm more concerned about the quality of industrialized daycare than I am the current implementation, to be sure. Right now, the parents can make an economic choice about whether their productivity in society is greater than the cost in terms of day care. Put simply, if they make less money than daycare costs, they stay home. Generally, they would (and should!) stay home even if the costs are anywhere close, like making $20,000/year and paying $12,000/year for day care. Once you factor in costs related to work, like additional clothing, driving costs, etc., that ratio really isn't worth it. That's not even considering the value of spending time with your own kids.
Excellent point, and I appreciated Andrew Yang's efforts to popularize the idea of trying to measure this type of value and optimize for it rather than just for GDP. There's a bit of a streetlight effect where optimizing for GDP happens because it's easier, not because it captures the overall well-being of society.
Smart Codex Readers: Convince me that COVID Vaccine Passports are a good Idea.
Currently against them, as I find them:
1) Counterproductive; they dissuade trust
2) Discriminatory and segregationist
3) Onerous to implement
4) Seem to discount natural immunity
5) Inevitable privacy concerns
6) No methodological target for removal; fear that the passport requirement will shift from "unvaxed" to "undervaxed"
7) Herding effect where the unvaxed further congregate among themselves
8) Divisive, and leads to distrust of public health institutions
9) Seem to ignore that the majority of exposure occurs intra-family, and scapegoat public spaces
10) COVID vaccines are not sterilizing, making vaccine passports essentially worthless
11) My libertarian bias, where this just feels wrong
Looking for arguments in favor:
Should Hawaii where I live implement this policy?
Does a temporary vaccine passport potentially raise vaccination rates, improve "public perception of safety", and lower hospitalizations/deaths enough to justify a divisive policy that could be ineffective, counterproductive, and possibly lead to worse health outcomes and less trust?
Thanks!
Would your "libertarian bias" feel better if instead of yes/no admission and vaccine passports, that businesses simply implemented separate sections and differential pricing?
For example, I am absolutely positive that if airlines offered "vaccinated only" flights, (1) LOTS of people would be willing to pay more to be on those flights, and (2) if the airline determined to charge more for fares on the unvaxxed flights to offset their risks in serving those passengers, it would motivate more people to get vaccinated. Is that somehow better in your mind than if airlines simply started requiring vaccine passports for all passengers and stopped serving the unvaccinated?
I don't find any of your other points compelling. I see no privacy concern or slippery slope with showing people and businesses that you desire to interact with a simple verification of your vaccine status. Nor do I care if unvaxxed people feel that it is divisive...given that they are the ones acting like petulant anti-social babies, I couldn't care less. The rest of us don't want to be around them. Just like in the 80s and 90s, society decided they didn't want to be around smokers, so that no one can smoke anywhere nowadays but their own home or cordoned-off smoking sections.
My general reaction is that EVERY time there is any sort of change at all in public policy...whether that is making DUI a felony or requiring seatbelts or issuing social security cards or making bars and restaurants non-smoking or updating a user interface or literally *anything*, ever, people yell and scream and complain and gnash their teeth and then the change happens and everyone gets over it and forgets about it two weeks later. The unvaccinated are acting that way and making a huge ado about nothing, at best, and at worst making a terrible health decision for themselves merely out of spite and wanting to stick it to libs/elites/whoever they think they're sticking it to.
I would imagine that a place like Hawaii, which is hugely dependent upon tourism from the affluent and older crowds (who are the most vaccinated people), would benefit from implementing the policy. The cosmopolitan frequent-travelers with lots of disposable income are not the people making up the anti-vax crowd.
I know it's a meme that the unvaccinated are dissenting to spite you, but we really aren't. There are 2.5 times as many reported deaths in the VAERS database for COVID vaccines as for every other vaccine in the database combined (4,831 vs 1,919). Maybe that doesn't worry you, but can you at least admit it is reasonable for a person to worry about it?
Block is correct that there are a bunch of reasonable-sounding fellows headed by Brett Weinstein going around telling people that "the vaccine" is dangerous (and not making an effort, when I watched them in June, to distinguish between the 4 available vaccines in the two countries discussed).
I am now convinced that their apparent reasonableness is only skin deep, as it seemed like they never considered any alternative hypotheses before concluding "vaccines dangerous", and even at that time they were misrepresenting some data they had access to in obvious ways.
Here's the review I started on 3 hours of claims: https://www.lesswrong.com/posts/7NoRcK6j2cfxjwFcr/covid-vaccine-safety-how-correct-are-these-allegations
Is what you consider "reasonable" really rational, though? I guess this gets back to the whole rationality thing and the underlying basis for ACT. You present two sets of numbers and say, "Hey, look at the difference here between COVID vaccines and the other vaccines!" It sounds reasonable to at least note the difference in rates. But you posted these two numbers without putting them in the context of any other data. If I put that 4,831 number in the context of the 190,000,000 people in the United States who've received at least one dose of the vaccine, it seems a lot less concerning.
Just looking at the US numbers, though, shows that we have an overall 1.7% chance of dying from a COVID-19 infection and a 0.0019% chance of dying from one of the vaccines. That makes it roughly 895 times as likely to die from the disease as from the vaccine. Now, you can burrow into those basic numbers and argue that the IFR is lower because of all the undiagnosed cases, but even so the difference is pretty damn big. It's hard to argue that your chances aren't better with the vaccine than with a COVID-19 infection.
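For what it's worth, here is the back-of-the-envelope arithmetic behind that ratio as a quick sanity check (the two probabilities are the rough figures quoted above, taken at face value and not adjusted for age, exposure, or undiagnosed infections):

```python
# Back-of-the-envelope comparison using the rough figures quoted above
# (not adjusted for age, exposure risk, or undiagnosed infections).
p_death_if_infected = 0.017      # ~1.7% US case fatality rate
p_death_from_vaccine = 0.000019  # ~0.0019%, taking VAERS-reported deaths at face value

ratio = p_death_if_infected / p_death_from_vaccine
print(f"Dying from the disease is roughly {ratio:.0f} times as likely as dying from the vaccine")
# -> roughly 895 times
```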
So, to answer your question, I don't think it's a reasonable thing to worry about, because I think your cognitive bias against the COVID-19 vaccines is making you latch on to data that puts them in a negative light.
https://www.cdc.gov/.../vaccines/safety/adverse-events.html
I take your point, but your analysis leaves out a few things too. You assume a binary choice: "Either I get the vaccine or I will catch COVID" but you might not actually ever catch it. The choice is between a small danger today vs the possibility of a larger danger in the future.
Even that, however, assumes near 100% effectiveness of the vaccines, which at this point is clearly untrue. In my home state (Vermont), around 25% of cases in the last couple of months were among vaccinated people. Also, it's a small sample size, but the death rates have actually been higher in breakthrough cases than the overall rate. All of which muddies the water enough that I still contend it is reasonable to at least wait for more data.
You can believe whatever damn fool thing you want, but numbers argue against what you believe.
1. Current estimates are that Delta has an R value around 7. This number is based on in vitro studies (rather than epidemiological surveys), but that puts the R value of SARS-CoV-2 at about 3x that of the 1918 H1N1 flu. And that's 2x-3x the R value of the common cold. You've gotten the common cold, haven't you? You've gotten the flu, haven't you? Why do you think you can avoid catching COVID-19 unless you have special powers that exempt you from the laws of probability?
2. Or maybe you're sitting by yourself in your survival bunker waiting for the Zombie/COVID apocalypse to pass. But SARS-CoV-2 is a Coronavirus, and in the past Coronaviruses haven't shown themselves to be amenable to herd immunity. COVID-19 will likely become COVID-22, -23, and -24. It might mutate into a relatively harmless cold-like virus. Or it might get worse. We'll see. But when you run out of dried beans and beef jerky you're going to have to come out your bunker and deal with people who might be contagious.
3. The fact that vaccines are less effective against the Delta variant is actually more reason to get vaccinated — because the vaxxed now have a higher probability of being contagious. So, you won't be able to depend on the vaccinated to protect you.
4. Also, from a mathematical perspective, if Vaccine X is only 80 percent effective against Variant Y, and 80 percent of the population has been vaccinated with Vaccine X while 20 percent remains unvaccinated, then the pool of potential breakthrough cases (80% x 20% still susceptible = 16% of the population) and the pool of unvaccinated (20%) are roughly equal in size. So you'd expect roughly half the cases to be in the vaccinated population and half in the unvaccinated population. The fact that Vermont has a 25% breakthrough rate against Delta still shows that COVID-19 is preferentially attacking the unvaccinated. If you look at the relative percentages hospitalized, if Vermont is like other states, probably only 3% of those hospitalized for COVID-19 will be from the vaccinated population.
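To spell out that arithmetic, here is a minimal sketch assuming the hypothetical 80% coverage and 80% effectiveness figures above, equal exposure across groups, and that "effectiveness" means reduced susceptibility:

```python
# Expected share of cases among the vaccinated, under the hypothetical numbers above.
# Assumes equal exposure and that "80% effective" means 80% reduced susceptibility.
vaccinated_share = 0.80
effectiveness = 0.80

susceptible_vaccinated = vaccinated_share * (1 - effectiveness)  # 0.16 of the population
susceptible_unvaccinated = 1 - vaccinated_share                  # 0.20 of the population

expected_breakthrough_share = susceptible_vaccinated / (
    susceptible_vaccinated + susceptible_unvaccinated
)
print(f"{expected_breakthrough_share:.0%}")  # ~44%: roughly half of cases even with a good vaccine
```

Vermont's observed 25% is below even that, which is the point: cases are still falling disproportionately on the unvaccinated.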
>You've gotten the common cold, haven't you? You've gotten the flu, haven't you?
I don't know about Block, but I've never gotten the flu, and I'm no youngling.
More importantly, "the common cold" is about two hundred immunologically distinct diseases with nearly identical symptoms, that cycle in and out of circulation as herds become mostly immune to the last one. Most people will at some point catch *a* cold, very few people catch *all 200* colds.
"The flu" is a couple dozen immunologically distinct diseases, and ditto.
COVID-19, unless something weird happens going forward, is basically one disease. Several strains but they're immunologically related and one is clearly dominant. So your intuition is off. Most people contract a minority of circulating colds and a minority of circulating influenzas, which suggests that the probability of contracting any one specific disease is <<50%.
For the average vaccinated person, the odds are pretty good that they will someday catch *a coronavirus*, but also pretty good that they will never contract Covid-19. Most likely, the coronavirus they catch will be one of the ones already included in "the common cold".
I'm sure some people are legitimately afraid of the vaccine, but plenty of others are in fact doing it out of spite and because "they don't like being told what to do." I personally know some people in that camp.
I think even if it could be validated that those 5,000 people actually died from the vaccine and not something else, that is out of 200 million people and certainly much lower than serious illness, long COVID, or deaths from COVID itself. And it's likely caused almost entirely by anaphylaxis, which could be avoided by waiting an hour after getting the vaccine at a location where there's an EpiPen available. I get allergy shots once a month and have to bring an EpiPen and wait half an hour before I can leave the Dr's office each time...not a big deal, and reasonable any time a severe allergic reaction could occur.
Look, I had pretty severe side effects from my first dose, which I was NOT expecting...3 days feeling like a bad flu, full-body aches, severe headache. I was scared to get the second one. But the second one was actually easier than the first (perhaps just because I was mentally prepared) and now that I'm through it I am VERY happy to be vaxxed. Furthermore, it isn't about fear of dying, as I think the chances that I would die of Covid are about the chance I'll die from a lightning strike...I am healthy, skinny, and under 50. But it's still worthwhile to know I'm reducing my chance of keeping it spreading, putting elderly people at risk, and reducing my chance of feeling like crap for two weeks straight or getting long Covid. If everyone had just gotten the shots this spring when they were available, we would be very unlikely to be dealing with this current surge, new mask mandates, and people in the ICU, which would be a good thing.
"If everyone had just gotten the shots this spring when they were available, we would be very unlikely to be dealing with this current surge, new mask mandates, and people in the ICU, which would be a good thing."
I recommend you look at the data out of Israel to see how this is likely false. They are 80%+ vaccinated and have the most hospitalizations ever recorded during the pandemic.
Similarly, in Hawaii we never ended restrictions (no freedom day for us for masks etc.), and with nearly 83% of adults vaccinated we're having the most cases ever.
In regards to the mandate: I have no issue with private businesses (your airplane comparison to smokers) doing their own thing. I'm still against a mandate from the government, as in my personal experience it degrades trust with communities, especially taking into account the colonial history of the islands.
I work in public health vaccination drives and have heard every fear, question, and concern possible; it's a very complex issue.
The best way I have found to convince an unvaxed individual is the following:
'The known current risks associated with COVID are greater than the risks associated with the vaccine. The disease is endemic and you will meet it in the future. Your choice is to meet it vaccinated or not. Being vaccinated will make it easier and milder.'
Shaming does not work. Love, empathy, understanding, and harm reduction models are more productive.
Israel is apparently reporting only 39% Pfizer vaccine efficacy, which is pretty bad but certainly better than nothing. But it's still relevant that it has 88% effectiveness against hospitalization and 91% effectiveness against severe illness: https://www.cnbc.com/2021/07/23/delta-variant-pfizer-covid-vaccine-39percent-effective-in-israel-prevents-severe-illness.html
Given that there is a 5% rate of side effects that "prevented daily activities/work/required a medical visit" for each dose, some people are suggesting that the vaccine doses that were chosen were much too high; I even saw some report on Twitter showing good vaccine efficacy given 3 tiny doses (like 1% of a normal dose). Given that small doses would probably reduce these side effects and allow the entire world to be vaccinated, I hope small doses get more attention, quickly.
(source for the 5% figure is the vaccine survey offered to all Canadians [AFAIK] when getting a shot: https://canvas-covid.ca/results )
Going through your points in order:
1. Not sure what you're talking about here. If you only want to allow people with vaccines to travel, a vaccine passport seems tautologically like it couldn't be counterproductive. As for dissuading trust, I mean, maybe, but Trust but Verify has been an iron law for a while now. Wouldn't you feel more comfortable knowing that people coming to your home were way less likely to be infected under this new regime?
2. Sure... but discriminatory against a group we want to shrink. That's like saying welfare programs are discriminatory against the poor, since hopefully they would make fewer of them. Something being discriminatory against a population isn't inherently bad; murder laws are discriminatory against the population of would-be murderers. (Not saying people who don't get vacced are like murderers, just making an analogy.)
3. Onerous... maybe? Setting up a national DB or giving out some kind of ID doesn't seem that onerous, but at scale it probably could be.
4. I guess, but I don't think anyone cares about stressing the capacity of natural immunity. As a society, we don't gain value by giving a shit about natural immunity.
5. In what sense? If you've been vaccinated or not? I don't see a major privacy issue here.
6. Vaccination status is currently a binary, so I'm not sure what your point here means.
7. Ok, but the opposite of that is unvaccinated people moving more freely throughout all of society which is also bad. The existence of people without immunity anywhere is the problem, not where they happen to be located.
8. Maybe... but this can be used to argue for or against any point. If the existence of controversy is enough to defeat any idea then nothing happens.
9. Citation? I haven't heard this. It also seems self-negating (assuming families are closed circles, how does the infection first happen?)
10. fair dinkum, mate. I think the vaccine reduces the duration of potential infection, though, correct? So it presumably decreases risk, but I'll give this point to you.
11. ok. My communitarian biases don't, but we can't fight our initial impulses.
Overall I think that the cost here is pretty little, the potential gain is sufficient (both in terms of reducing the potential number of infected people traveling and increasing trust from people in the areas being travelled to that their visitors aren't infected) that it's worth doing.
On July 21st, after a surge in cases (~10k% increase) caused by unvaccinated tourists, Malta started requiring proof of vaccination for all travellers (previously a negative PCR test was sufficient). At that point, the 7-day average case count was 203. One month later, it was down to 62. So vaccine passports do seem to work better than PCR tests, and the alternative of shutting down international travel altogether again would seem less liberal.
> One month later, it was down to 62. So vaccine passports do seem to work better than PCR tests
Or maybe people cancelled their plans to travel to a place that had a ~10k% surge in COVID cases. That also fits the timeline you laid out.
I don't understand what mechanism you are proposing where the purported decrease in tourism would lead to R < 1. I believe that the mechanism I am proposing, where keeping a low proportion of unvaccinated in the population leads to R < 1, is generally accepted.
Tourists by definition travel to many places and have contact with many people in the places they visit, so they are a vector for spread within a region. R = (# of contacts per unit time) * (probability of transmission per contact) * (duration of infection).
Vaccines affect the second and third factors, but tourism influences the first factor. So if tourism tanked you'd still see a drop in cases, even without any additional restrictions on who can visit.
For example, if Malta typically has 10,000 people visiting per day, but 9,000 of those cancelled their plans because of the surge you described, don't you think that would have a noticeable influence on R? I don't know what the actual numbers are, but it's a hypothesis that also fits the information you've provided.
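To make that concrete, here is a toy version of the calculation; every number in it (contact rates, per-contact transmission probability, infectious period) is an illustrative assumption, not Malta data:

```python
# Toy calculation: R = contacts/day * P(transmission per contact) * infectious days.
# All numbers below are illustrative assumptions, not real Malta figures.
p_transmission = 0.05    # assumed per-contact transmission probability
infectious_days = 5      # assumed average infectious period

contacts_normal = 10     # assumed daily contacts with normal tourism
contacts_reduced = 4     # assumed daily contacts if tourism collapses

r_normal = contacts_normal * p_transmission * infectious_days    # 2.5
r_reduced = contacts_reduced * p_transmission * infectious_days  # 1.0

print(r_normal, r_reduced)  # a drop in contacts alone can push R toward or below 1
```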
I haven't heard of any cancellations based on fear of COVID, only of cancellations because of the restrictions (i.e. by those without vaccine passports). This lowers the probability that such cancellations are widespread. I suppose that in a couple of weeks, when the airport releases its August numbers, we will be able to rule out definitively the hypothesis that tourism declined by 90% between July and August. We already know that unvaccinated tourism declined by 100%.
Do you mean an actual passport, meaning a document that lets you travel interstate, or internationally? I'm a little bemused that that idea is even controversial, at least for international travel. When I was a kid I had occasion to travel (with my family) internationally, and it was routine that you needed documentation of certain vaccines to go to certain countries. I vaguely recall having a little yellow book, some kind of "International Vaccine Certificate" that my mother carried around with my passport. Different countries required different vaccines, and I remember one country in particular cost me about 3 separate visits to the doc's office to get stuck with needles. Boo.
Anyway, I think the concept is only really new and surprising to young First World people who have grown up for the past 30 years or so in a world in which infectious disease seemed to have been thoroughly vanquished. (Surprise! Turns out it wasn't...Mother Nature is a sneaky bitch.)
What you may want to ask yourself is: would your philosophical reasons stand if COVID had a mortality rate like the Black Death (say 60-80%)? That is, if one person sneaking in from some other ravaged country could cause 80% of Honolulu to snuff it within 4 weeks, say, would you still be firmly opposed to burdening that poor soul with the necessity to get a vaccine if he wanted to visit, and prove it? If your answer is "hell no!" then your opposition isn't really philosophical at all, it's practical -- you're just saying *in this particular case, for this particular disease* the civil liberties infringement is not worth the benefit to public health.
Unfortunately, if the core argument is practical, then reasonable men can disagree about the exact cost/benefit ratio required to support or oppose the idea, and it comes down to a lot of gritty detail, much of which is actually unknown because we lack the needed data.
1) Counterproductive how? Do you think that they make people less likely to get vaccinated, instead of more? Real-world examples prove that vaccine mandates increase vaccination rates: https://apnews.com/article/europe-business-lifestyle-health-travel-1d10271c4f1617521892d49d83b773ad
2) Yes, but this is tautological.
3) Not at all. Everyone has smartphones, it's not hard. Plenty of places have implemented QR code passes.
4) "Among Kentucky residents infected with SARS-CoV-2 in 2020, vaccination status of those reinfected during May–June 2021 was compared with that of residents who were not reinfected. In this case-control study, being unvaccinated was associated with 2.34 times the odds of reinfection compared with being fully vaccinated." https://www.cdc.gov/mmwr/volumes/70/wr/mm7032e1.htm?s_cid=mm7032e1_w
5) There's not really a way to quantify the value of privacy. But consider: how protective were you of your vaccination status (MMR, etc) in 2019?
6) This is a slippery slope fallacy.
7) The unvaxed would also be incentivized to get vaccinated.
8) Why would a private entity, say a bar or restaurant, implementing a vaccine mandate have any bearing on the public's trust in a public health institution?
9) So then you'd be in favor of them in high-risk areas, like indoor dining?
10) I'm not sure what you think sterilizing means.
11) It's good that you're upfront that this is your bias, but maybe try thinking about it this way: wouldn't it be governmental overreach to ban a private business owner from being able to run their business how they see fit, including their choice on requiring proof of vaccination? Why don't you value the freedom of business owners?
Re: Hawaii, only about half of the population of Hawaii has been vaccinated. All else equal, would the world be better if no other Hawaiian got vaccinated, or if every remaining Hawaiian got vaccinated? Again, all else equal. Forget the yeah-buts, forget that but-what-abouts, just all else equal. Is it better with no more vaccinations, or everyone getting more vaccinations?
Thanks for the points.
1) Note on the counterproductive nature of vaccine passports:
By creating division between vaccinated and unvaccinated populations, vaccine mandates damage public trust in institutions. Personally, as a member of vaccination clinics, I have met several individuals who are unvaccinated and are doubling down against vaccination because of loss of trust in government and feelings of coercion. Future health issues could be affected by this loss of trust. In Hawaii, the history of colonialism etc. further complicates communicating and establishing trust with the most unvaccinated. The further loss of "privileges", plus the carrot of a mask-free society and the fear of booster infinity, make it hard to connect.
As counter-research to your note: research from across Europe shows that compelling people to take vaccines does not necessarily result in higher uptake. Further, statistics show that the UK has some of the most positive attitudes towards vaccines across Europe, and that the top 5 European nations for positive attitudes towards vaccination all have voluntary vaccination policies. The European nations with the most negative attitudes towards vaccination include those with mandatory vaccination policies: Hungary, Slovakia and Croatia.
Here are two links exploring the counterproductive nature of passports:
Vax Passports Are a Bad Idea
— Prasad explores the balance of possible (but uncertain) health benefits and social harms
https://www.medpagetoday.com/opinion/vinay-prasad/92107
Covid-19 vaccine passports will harm sustainable development
British Medical Journal
"Vaccine passports interfere with that future as they create a structural barrier to sustainable development, benefiting only the few at the expense of so many."
https://blogs.bmj.com/bmj/2021/03/30/covid-19-vaccine-passports-will-harm-sustainable-development/
2) Yes, tautological, but currently in Hawaii the most under-vaccinated populations are the Filipino, Hawaiian, and Black communities, compared to the Asian/White populations. Similarly, in New York City close to 60% of Black Americans have not been vaccinated (https://www1.nyc.gov/site/doh/covid/covid-19-data-vaccines.page). One cannot support a policy whose burdens fall along differences in vaccination by gender, race, or socioeconomics. The vaccine passport may serve to perpetuate inequality, segregation and structural racism.
3) Not everyone has smartphones, and age issues make it further complex. I work in public health and technology is a major issue with under-vaccinated populations. Luckily, among the over-65s we've hit probably close to 87% vaccinated, so this seems moot.
4) There are several studies comparing natural and vaccine-induced immunity, and it is complex. I agree, though, that it looks like someone with natural immunity still benefits from vaccination.
5) My vaccination status in 2019 was on an old piece of paper. My daughter's is digital. The COVID vaccine passport is presented every time you enter a restaurant etc. and is linked to a digital system.
6) The entire COVID pandemic has been a slippery slope. For my part, I could accept a vaccine passport if it had a clear end date / end target (say 80% vaccination etc.) and review.
7) Herding leads to echo chambers. Having the unvaxxed see the vaxxed living healthy post-vaccination is the best way to convince them against their fears, at least in my opinion. I'd like to see data from NYC on how their vaccine passport program is affecting vaccination rates.
8) Talking to unvaxxed individuals, there are flash points of aggression: customers getting angry, new labor costs, and staff/owners not wanting the role of health bouncer/enforcer, with potential for aggression and flash points at premises doors. Few want to operate in a checkpoint society where you need to show ID/vax cards daily. Similarly, the very same people who just months ago claimed voter ID is racist because minorities don't have IDs are suddenly acknowledging everyone needs ID, creating further distrust.
9) I think employer mandates and targeted vaccine drives are more productive. Most contagion occurs within the household unit, and fewer cases come from restaurants/public spaces in contact tracing data from Hawaii.
10) Sterilizing, for a vaccine, means that it cuts transmission. Vaccinated individuals are still spreading COVID, making vaccine passports essentially worthless. If both vaccinated and unvaccinated individuals can spread the disease, then there is limited logic in such an intervention...
11) Yes. I have no problem with individual business decisions to mandate vaccine requirements. I find the top-down approach ineffective.
>>
Re Hawaii: Hawaii has 69% of the general population vaccinated with one shot, 62% fully. This is for all ages. Exclude the kids and we already have the majority of at-risk individuals vaccinated, so targeting a vaccine mandate program that would mainly benefit the wealthiest individuals on the island, who are already vaccinated, is probably a distraction from other interventions that could be more effective.
It's funny that the strongest argument FOR a vaccine passport appeals to the left-authoritarian in me: a successful vaccine passport system could give the government an opportunity to nudge other health outcomes. For example, individuals with a high body mass index could be limited in the types of restaurants they can visit or what they can purchase. Alcoholics could be restricted from entering bars. Similarly, individuals with other communicable diseases could be tracked to ensure they do not cause spread within public settings, with vaccine passport access points expanding over time.
Ultimately the disease is moving to endemic stage and the vaccines are serving more as a prophylaxis against serious outcome.
So, as Tyler Cowen says: just get vaccinated and live your life.
https://www.bloomberg.com/opinion/articles/2021-08-10/delta-straussians-know-how-to-live-with-covid?srnd=opinion&sref=htOHjx5Y
All the yeah-buts etc. aside: maybe, since Hawaii likes to take its time, we'll wait and see how the vaccine passports work out for NYC/SF.
> 8) Talking to unvaxxed individuals, there are flash points of aggression: customers getting angry, new labor costs, and staff/owners not wanting the role of health bouncer/enforcer, with potential for aggression and flash points at premises doors. Few want to operate in a checkpoint society where you need to show ID/vax cards daily. Similarly, the very same people who just months ago claimed voter ID is racist because minorities don't have IDs are suddenly acknowledging everyone needs ID, creating further distrust.
I think this runs together several discussions that are often run together, but need to be separated.
Right now you need to show a drivers license if stopped by a police officer while driving. You need to show a proof of age (in practice, usually drivers license, but occasionally passport) if you want to buy alcohol or tobacco or cannabis. You need to show proof of employment eligibility (usually Social Security card, sometimes passport) if you want to start a new job. In many states you are required to show ID (usually drivers license, but some other forms are allowed) in order to vote. You need to show proof of vaccination (usually MMR and/or meningitis) if you want to enroll in elementary school or university. You need to show proof of identity to board an airplane (usually drivers license domestically, and passport internationally, with special visas and vaccination certificate sometimes required depending on origin and destination countries).
All of this suggests that we already live in a "checkpoint society" where you need to show ID/vax cards daily - just that most of the purposes are served by the drivers license and only a few require vaccination or passport.
Some people have pointed to this fact and said that extending the ID requirement to voting is a small enough imposition, because voting is a rare enough activity, and enough people have ID already because of the other things. The response that is usually given is that voting is an important enough activity, and the forms of ID that are required are usually enough of a hassle and cost to get, that it's not worth the benefit.
Putting vaccination requirements on stores, restaurants, and concerts would be making the requirement stricter. But the vaccination card is much easier to get than a drivers license (for instance, in most states, you can get a vaccinator to come to your address for free to give the vaccine, if you and four other people are willing to get vaccinated - and if not, you can still go to any CVS, Walgreens, Walmart, or many other common chains and get it for free, rather than having one location per county as with other IDs).
I think most current opponents of voter ID would drop their opposition if it were as easy for everyone to get a free voter ID as it is for everyone to get vaccinated. (Though I haven't actually checked whether getting vaccinated requires the same sort of ID that voter ID laws do, in which case this really is a problematic imposition.)
> In many states you are required to show ID (usually drivers license, but some other forms are allowed) in order to vote
That reminds me of how much ire I have for the people who decided showing ID for exactly one of [vaccine, voting] was horrible, but showing it for the other is completely normal.
> One cannot support a policy that discriminates and segregates differences in vaccination by gender, race, or socioeconomics.
(1) If they have not had the same opportunity to get vaccinated, then it is a problem and should be fixed.
(2) I do not care at all WHY people from country XYZ are unvaccinated. If they are not, they should not be allowed to enter my country.
(In the case of refugees etc. I am fine with funding their vaccination on entry.)
To (7) I would add - the unvaxxed self-isolating seems fine. It still means they don't infect the rest of us.
I used my "vaccine passport" when I went into San Francisco the other night to go out to a bar and then to dine. It's a Q-code that I carry around on my iPhone that loaded from the state vaccination website. I showed my drivers license and my vaccination passport, and I was able to sit at the bar alongside other vaccinated people. Although that passport won't necessarily prevent me from getting a breakthrough Delta infection, at least I know the people around me are also taking precautions. It was very reassuring.
One worry I have, as a Texas resident, is that I will be locked out of these systems. I tried to get the California app and the New York app when I heard about them, but both of those apps say they can only verify vaccinations conducted at locations inside their state. I really don't want to have to bring an important paper document with me to bars when I'm traveling, but I worry that my state's anti-passport stance will make it hard for me to verify my status any other way.
They'll accept a photo of your paper vaccine card, along with a valid driver's license. I'm sure people from out of state will start gaming the system, but it wouldn't be the California peeps, who would be the majority of the customer base in most places except for some big tourist destinations like Disneyland. Las Vegas would be a totally different story, though...
Good to know! I've favorited my vaccine card photos on my phone album now, just in case this comes up.
My understanding is that there are two main ideas for vaccine passport implementations.
The first is for travel across borders (so, for example, people coming to Hawaii). With breakthrough cases, it seems like it wouldn't be particularly effective, so no comment.
Then there are things like what NYC is doing, and smarter ways of doing it like in some European countries, where it also accepts a proof-positive of COVID from over 2 weeks ago. Let's call these "bludgeoning passports." My understanding is that their purpose is really just to bludgeon the populace into getting vaccinated by making life much more difficult for the unvaccinated. I think it's pretty clear why this would be effective, so I think that if you value having the vast majority of the populace vaccinated, they're a good tool.
Now, I think the main people at risk of COVID from not having enough unvaccinated people are unvaccinated people, who would largely oppose this policy. But, for politicians who need to keep case numbers down and vaccination rates up in order to get reelected, it's clearly a valuable policy.
"Bludgeoning" is a strong word. State mandated arm twisting seems more accurate. And moreover it will probably work except for the never-vaxxers. In France, Emmanuel Macron announced that people would need to proof of vaccination to get into cafes and restaurants, and within a week 1 million citizens made appointments to be vaccinated.
Right now, Hawaii wants proof of vaccination before you visit. They might accept a COVID test and period of quarantine, but since I have my vax passport I didn't bother to investigate what the non-passported do.
I'm surprised governments haven't offered financial carrots, though. What if the Feds offered a $1,000 deduction off our taxes with proof of vaccination? Of course, states like NY and CA wouldn't have a problem providing proof to residents for tax purposes. Citizens of states like TX and FL would be left out in the cold. Which would put pressure on those states to implement vaccine tracking apps.
Vaccine passports only make sense in clinical settings:
1. Doctors/nurses should be vaccinated so that they don't spread COVID to their patients. Same applies to all other vaccines. Sorry but you don't get a choice if you're treating vulnerable people face-to-face.
2. During COVID surges that fill up hospitals it makes sense to use vaxx passports to deprioritize those who rejected the vaccine. It should be your right to reject it but then if hospitals are full you should be the last person to receive any treatment. Choices have consequences.
Other than that I agree that they're a waste of time.
Wouldn't vaccination passports have the same sort of value as drivers licenses? The point is that it is a document that certifies that you are lower risk for doing this activity in public spaces, and you are banned from doing these activities in public spaces unless you have the document certifying your lower, but non-zero, risk.
Do we apply the "choices have consequences" to any other medical condition? It doesn't get applied to obesity, smoking, drinking, DUI, even criminals get medicial care. But yet someone chooses not to get a vaccine and now "no treatment for you!".
That is not true in certain circumstances. For example, you won't be eligible for an organ transplant if you have disastrous personal lifestyle choices. Indeed, I would expect a long criminal history might even disqualify you on the grounds that you're terrible at self-discipline.
Good point. That is similar circumstances/conditions, though usually the timescale is a lot bigger (months or years) and the supply is much smaller. I am not 100% opposed to triage based on vaccination status IF the science shows it is a large factor for similar condition patients. But to me it sure seems like there are many other factors that we know right now contribute massively to having issues. So yes if two patients are equally obese and the same age and both have high blood pressure and one is vaccinated and the other isn't, well choose the one who was vaccinated. But I don't think you get to that level of granularity for triage normally.
Is that true? I believe that organ transplants prioritize people who are most likely to benefit from the transplant, but you aren't removed from the list because you have bad personal choices - you are just de-prioritized if you (whether because of personal choices or genetics or age or anything else) are unlikely to derive as much benefit from the new organ as someone else.
Kind of splitting hairs there. Yes, if you make "disastrous personal lifestyle choices" (my phrase) and they end up having no significant effect on your health -- you eat too much, but are somehow not obese, you drink but somehow avoid any harm to your liver, you shoot up but magically don't have hepatitis, you live on the streets in a cardboard box, but remarkably enough make all your appointments on time and have 5 or 6 friends who say they'll help take care of you -- then, yes, you wouldn't be disqualified.
Huh? It doesn't matter whether the "lifestyle choices" have a significant effect on your health - it matters whether your condition is going to be able to be significantly improved by an organ transplant. An obese person is often going to have much better prognosis than someone who made the lifestyle choice of being old. And living on the streets in a cardboard box (let's ignore the question of whether someone is *choosing* that state) doesn't obviously seem like it's going to harm your prognosis more than many other states.
Obesity, smoking, drinking, DUIs, etc cannot be instantly solved with a shot in the arm. If you could instantly switch to a healthy diet with just one shot, then sure, lets discriminate against the obese. All the things you've mentioned are complex problems that people spend years working on, not something you can fix at your closest pharmacy.
A "shot in the arm" does not instantly solve anything. It takes several weeks to start taking effect and up to a month or more for full immunity. People can lose quite a bit of weight in a similar timeframe. Also we have known obesity is a factor for a long time, so the obese have no excuse either. And of course a DUI is literally a choice to drive after drinking. To me it just sounds like you are making excuses.
My understanding is that obesity is quite stigmatized in the medical industry and obese people have trouble accessing treatment and being taken seriously, and doctors tend to attribute every problem to the patient's weight whether this is merited or not. Then again my understanding is also that obesity is not so much about choices either.
Also: we ration medical care in various ways, including by ability to pay. During triage doctors ration medical care based on who is most likely to receive the most benefit from the doctor's time. Rationing by vaccine status, if rationing does become necessary, seems morally better than flipping a coin.
Very much the case. I've been very fat. I've been not terribly fat. The rest of society has some very subtle snubbing and bias against fat people. My experience with doctors - including some bariatric doctors - is that they have some very overt bias against fat people.
Sounds like avoiding the vaccine isn't a choice either. It is the culture and climate they live in.
I am also not aware of any science that says that for those in a similar condition, the vaccine means they will have better outcomes. I 100% agree the vaccine results in better outcomes, but when someone needs the ICU, vaccine status may or may not matter at that point.
If obesity isn't a choice, can we imagine anything as a choice? Putting food in one's mouth is quite intentional. And if it's "base urges" driving it and not individual choice, what isn't?
Calling an outcome a "choice" means that the outcome was a foreseeable consequence of an intentional action. If I intentionally eat two donuts every day knowing it will make me obese, I'm choosing obesity.
But what if I eat only meals recommended to me by some dietary expert - even if I don't much like them, and even if I'm unpleasantly hungry in between meals - and I still end up obese? Did I choose that?
Because that's what all the studies finding that various professionally-designed and administered diets fail to impact obesity are saying: the patient did all the "right" things and made all the "right" choices and still got a bad outcome.
I don't think we have any grounds to say the person should have known better if even our best dietary experts have no consensus on how to safely, reliably, and sustainably treat obesity.
A couple different angles can be invoked here:
For one, there is a trivial, universally stated, and obvious relationship between calories eaten and obesity - eat less, lose weight. That satisfies the "knowledge beforehand" criterion you suggested, and it isn't followed. If you observe yourself becoming obese or overweight, which most people do, you can eat less. Calories are not exactly an unknown, and calorie counting is a prominent habit.
The experts prescribe “eat less” alongside diets. And that isn’t followed.
The patient didn't make the right choice - they could've just not eaten as much food. And yes, they should also cut down on processed food, eat more fruit and veg, and whatever else, but eating less is the baseline.
Experts do have a consensus: put less food in mouth. But nobody actually does it, so they struggle for other ideas.
In the U.K. we recently had one of our first incel terror attacks (if that's what you want to call them); the killer was 22.
A few months ago (in February) we also had a high-profile case of a police officer kidnapping, raping and murdering a young woman, which sparked heated discussions about violence against women. The killer was 48.
What I find so interesting is that these two cases fit almost perfectly as examples of the bimodal age distribution of mass killers, where younger mass killers have an average age of 23 and older killers have an average age of 41, but there aren't that many 33-year-old killers.
Some of this is discussed here ( https://www.psychologytoday.com/gb/blog/hive-mind/201710/mass-killings-evolutionary-perspective ) but I find it really frustrating that things like this are never discussed in the aftermaths. So I’m posting here in case anyone finds it interesting.
Perhaps tangential to what you're aiming to discuss, but to describe the recent UK murders as an "incel terror attack" feels like a media beat-up; to my knowledge, there's no kind of manifesto, and the pattern of the attacks wasn't in any way tied to anything about sex - it was his mother whom he claimed was abusive, followed by the first people he ran into on the street outside. Awful, certainly, but hardly a politically motivated terror attack.
'terrorism' has been a meaningless term for decades now, but I do think the incel movement (or parts of it) is uncommonly well-suited for pushing unstable people into these types of attacks. It centers on two primary narratives: your life is meaningless and hopeless and can never possibly be good and you might as well die, and the reason your life is so bad is because the world itself is awful and the people in it are evil in various ways and everyone is deserving of your contempt and hatred. That seems like a perfectly-calibrated narrative to push people into 'I should commit suicide, preferably while killing as many people as possible.'
Of course, we're never going to know whether the movement had any significant causal impact in this case; as distant strangers on the internet, we can't know that level of detail about something that happened in the past and is this ambiguous. But it certainly wouldn't surprise me, and I do expect more and more situations like this to arise from people subjected to the movement.
It's also interesting that these folks never seem to get much public media sympathy. Islamic terrorists who travel internationally to commit murder get at least some sophistry about how they are oppressed in their native lands or some such thing and deserve our pity. Not so much for these folks.
Well, both the progressives and the traditionalists despise this demographic, so it's not that surprising. It'd take a saint or a heterodox contrarian to extend sympathy to them, who aren't exactly the mass media clout wielding types.
I tend to agree, I just picked both because of the relatively unhinged public/media response.
This is probably very much a solved/simple question, but for prediction markets, how should it be determined when a given prediction should be declared resolved, if there is no simple algorithmic way to determine what happened without human testimony? (Think pretty much all political events outside of crypto and stock movements.) Right now that sort of thing (like deciding who won an election) seems to be decided by site admins, who are both fallible and potentially quite corruptible by a determined individual or group.
AFAIK the usual way these sorts of things work is that you spell out criteria clearly in the contract, and then if people disagree whether the contract was followed, they litigate.
Bond covenants are probably the closest conventional analogue I can think of
Perhaps they could be chosen from a reddit-style forum, where anyone can propose questions and then people (with enough money in their accounts, to prevent spam) vote on which ones to put up to market.
If we want to entirely get rid of site admins, you could probably automate this entire system on the blockchain. Find some mechanism to add a smart contract to the blockchain when there are enough wallets with enough value in them backing the contract, and then people can bet on the event in the contract, which automatically determines payout.
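To make the idea concrete, here's a toy sketch (in Python rather than an actual on-chain language, with all names hypothetical) of a market whose resolution is decided by stake-weighted reporter votes instead of a site admin. Real systems layer dispute rounds, bonds, and forking on top of this, so treat it as an illustration of the mechanism, not an implementation.

```python
# Hypothetical toy prediction market: resolution is decided by stake-weighted
# reporter votes rather than an admin. Real on-chain systems (e.g. Augur)
# add dispute rounds, reporter slashing, forking, and fees on top of this.
from collections import defaultdict

class ToyPredictionMarket:
    def __init__(self, question):
        self.question = question
        self.bets = defaultdict(lambda: defaultdict(float))   # outcome -> bettor -> stake
        self.reports = defaultdict(float)                      # outcome -> total reporter stake
        self.resolved_outcome = None

    def bet(self, bettor, outcome, amount):
        assert self.resolved_outcome is None, "market already resolved"
        self.bets[outcome][bettor] += amount

    def report(self, reporter_stake, outcome):
        # Reporters put their own stake behind an outcome; in a real system
        # reporters who backed the losing outcome would forfeit that stake.
        assert self.resolved_outcome is None, "market already resolved"
        self.reports[outcome] += reporter_stake

    def resolve(self):
        # The outcome with the most reporter stake behind it wins.
        self.resolved_outcome = max(self.reports, key=self.reports.get)
        total_pool = sum(sum(b.values()) for b in self.bets.values())
        winning_bets = self.bets[self.resolved_outcome]
        winning_total = sum(winning_bets.values()) or 1.0
        # Winners split the whole pool pro rata to their stakes.
        return {bettor: total_pool * stake / winning_total
                for bettor, stake in winning_bets.items()}

market = ToyPredictionMarket("Does candidate X win the election?")
market.bet("alice", "yes", 60.0)
market.bet("bob", "no", 40.0)
market.report(reporter_stake=10.0, outcome="yes")
market.report(reporter_stake=3.0, outcome="no")
print(market.resolve())   # {'alice': 100.0} -- alice takes the whole pool
```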
That's pretty similar to how forking markets like Augur work.
Augur seems somewhat dead sadly? At least aside from sports? And there seems to be a problem with token version upgrade or something.
Interesting… I feel like that would probably put relatively strong bounds on the sort of events you can bet on with full trust, since if the complexity of the contract gets too high, it becomes practically impossible to check through it yourself to determine that it truly says what the poster claims it says.
Can someone recommend a good active "analyzing fiction books I'm reading" blog / newsletter / substack / whatever? I run one at http://dreicafe.com but I've been skewing towards satire more than literary analysis, and I'd like to siphon the energies from someone good at the latter because I love such stuff. When I studied creative writing in ages past, they made us keep a 'reading journal' which is a phenom way to process books; reading other people's was super valuable. A lot to be said for seriously mulling on a novel after consuming it. There must be a few blogs like this around?
If you mean analyzing from the perspective of a writer--trying to figure out how they work, or how fiction in general works--try https://www.fimfiction.net/blog/331277/bad-horse-blog-index
I reviewed all the books I read for like ... a year? On https://portfolio.matthewtalamini.com/category/review/ no pretense to serious literary analysis, though.
Posted this for those interested in crypto/defi.
https://iz2020.substack.com/p/delta-neutral-positions-on-mirror
Who is the most interesting person from history who is not a household name?
Admiral Yi Sun-sin is one of my favorite historical figures, and I wouldn't call him a household name (in the USA). Greatest Admiral to ever live
Moe Berg
Although familiar to at least a few here, Gurdjieff. Irrespective of his teaching most people who met him thought he was the most extraordinary human being they'd come across. And to me, that extraordinariness is interesting.
I mean he was just a new age cult conman imo. Which is quite neat in the same way the Scientology guy is
Check out _My Journey with a Mystic_ by Fritz Peters. For some reason Fritz Peters' family packed him off from NYC (by ocean liner in those days) to Gurdjieff's school in France when he was eleven. Peters says he wasn't consulted in his parents' decision. They just sent him without explanation. It's not clear if Gurdjieff knew he was coming, but Gurdjieff agreed to take care of him. He gave Peters the single chore of mowing the lawn to "pay" for his room and board, with the stipulation that he promise to mow it once a week no matter what (and there's a good tale there). Gurdjieff became Peters' mentor and caregiver. He may have been a charlatan, but he was a good-hearted charlatan.
Harry Nyquist
Brilliant anecdote: "Workers with the most patents often shared lunch or breakfast with a Bell Labs electrical engineer named Harry Nyquist. It wasn't the case that Nyquist gave them specific ideas. Rather, as one scientist recalled, 'he drew people out, got them thinking.'" (pg. 135)
Yeah, a genius, interested in what the other geniuses are doing. He's the person I would like to have lunch with. (I'm no genius.)
Depends a lot on your household I think.
For some reason two scary ones immediately came to mind...
Countess Elizabeth Báthory de Ecsed
Baron Roman Fyodorovich von Ungern-Sternberg
Countess Bathory I already knew, the Baron I did not. I was interested to learn that he came from Graz in Styria (Austria), as Sheridan Le Fanu's "Carmilla" is set there, and many Gothic stories were also set in that vicinity for the usual vampire, werewolf and cursed haunted castles storylines.
It's also the origin place of a character in the short story "Dracula's Guest" by Bram Stoker: https://en.wikipedia.org/wiki/Dracula%27s_Guest
There's just something about the place, I suppose?
I am definitely on some sort of list after googling those two
Does anyone here happen to be in PR, journalism, or anything relating to media relations? I would like to become more effective at reaching out to media/pitching stories more effectively, for both EA and personal reasons. If anyone prefers to talk privately about this rather than in the thread, I can be reached at yitzilitt (at) gmail (dot) com.
Just out of curiosity, how many people on ACT think with words? I only think with words when I have to write something, or when I rehearse a speech, or when I replay a conversation in my head. Otherwise, I seem to be making decisions and figuring things out without words. In fact, when I speak to people, unless I'm carefully rehearsing what I'm going to say, I don't think the words before I vocalize them.
Bonus question. Subjectively, do you ever try to pin down where in your brain your internal speech is coming from? For me, when I silently talk to myself, the talking seems to be happening about where my premotor cortex is located — above and behind Broca's Area, which is the part of the brain associated with language. However, when I vocalize words, they subjectively seem to be coming out of Wernicke's Area, midpoint between my ears (which happens to be the area associated with speech processing).
NB: After about ten years of doing Mahayana-style mindfulness meditation (including the basics of Dzogchen Rigpa meditation, which I never really mastered), I not only got used to observing my thoughts and feelings as they arose and passed away, but I became aware of where in my head they were subjectively arising from.
https://www.psychologytoday.com/us/blog/pristine-inner-experience/201111/thinking-without-words
I almost always think with words, and have no sense of where in my brain things occur.
When I'm consciously thinking about a specific topic, I usually have a stream of words as if I were explaining my thoughts to another person. Often I actually vocalize these words. Although sometimes I have a sort of a placeholder where I'm referring to some complex thing that I haven't yet given a compact name to, and this creates a sort of skip in the wordflow where that name would be if I had one.
I feel like this habit has contributed to me being good at explaining technical topics to other people.
Same, I generally have internal dialogue rather than internal monologue - and also same in that I'm pretty good at explaining technical things to people (with a particular specialism in cross discipline explanations, i.e. explaining number things to words people and vice versa)
I'm very surprised that you have a spatial sense of where in your head a thought is coming from! To me, if there is any location, it seems to be about the midpoint between the eyes and ears, since those are the sensory organs I most identify with. But I certainly can't differentiate from this being an illusion caused by over-reliance on those sensory organs vs this being a veridical representation of the neurons that are causing the sensation. The fact that I have on so many occasions learned that tactile sensations are being driven by a different location than the apparent location on my body makes me very suspicious of attempts to introspect a location. But I do think that I can get better at these things, and I have gotten better at locating bodily sensations as I do yoga - though that is primarily because of feedback from very precise bodily movements, which I don't think I have for internal thoughts.
I believe that I mostly don't think in words, especially when it's about something math-related.
I sometimes monologue in my head. But most of my thinking is just concepts.
My thought processes are largely just concurrent streams of words going on. I can hold concepts in my head, but the way I process them is via narrative. (think of the blind mice describing the elephant).
That being said, I've got basically no visualization capacity and don't dream, so I'm not sure how normal I am on this spectrum.
Suppose that you (the general "you", not the OP specifically) are driving, and see a potential hazard up ahead. Perhaps a cyclist, that you will shortly want to overtake. Does the word "cyclist" pop into your head? And such phrases as "oncoming traffic", "near-side parked cars", "pedestrian who looks like they might cross the road but hasn't looked in my direction yet", and so on? They don't for me, and when I've been asked during advanced driver training to give the instructor a running commentary on that sort of thing, I've found it impossible. The effort of grabbing all the words for the things I'm seeing and doing is too much of a distraction from the task of making progress while not killing anyone.
I'm a relatively inexperienced driver who was encouraged to start narrating when I was just learning, so that's a confounder, but yes I do generally think about the things around my car in words. It typically sounds a bit like, "Car, car, BIG car woah, pedestrian over there, car, car, use your blinker asshole, car door opening?, car, bike behind me, car..." (Narrating *out loud* during driving is still hard for me, though—it takes more brainpower to move my mouth than to think words in my head.)
Interesting. Normally I think in words, but not so much for driving decisions or object recognition.
Great example.
I think in words all the time, pretty much. Even my dreams are usually narrated and sometimes words-only (i.e. no images/sounds, just thinking words).
I continue to be surprised by how different the subjective experiences of different people can be. I've never ever had a narrated dream. But maybe you're just calling experiences "dreams" that I don't. I have fallen into dreams while thinking in words, with my words getting crazier as I fall deeper into sleep, which might be the same experience you're talking about. But I don't remember any stories or fictions which I dream that I'm participating in being narrated.
I usually perceive myself as thinking in words when actively thinking about something - as opposed to daydreaming, which is mostly imagined senses - but plenty of concepts feel like discrete things in my mind but don't explicitly have words and are genuinely difficult to put into words.
One oddity that might be of interest - when I am thinking in words, it's in a way that is akin to reading or writing, not akin to speaking or hearing. This is probably because I read a *lot* and don't have that much social interaction, relatively speaking. I distinctly recall as a child I dreamed in text!, though at some point as a teenager I started watching movies and playing games more and started to have dreams with visuals and sound. As best I can tell, my imagination can cobble together new composites, but only from discrete sense data that I have actually experienced - fiction makes it far easier to have experienced fantastical visuals than the associated smells and tactile sensations.
This sounds similar to my experience - that only very active thought is verbal, and that verbal thought is as much written as it is spoken.
Great question, because so much of Western philosophy is based on Plato's presumption that all thought and reason is in words.
I nearly always think, then hear a voice in my head translating my thoughts into words. But I definitely do the thinking first--the idea is fully-formed before the sentences are, as proved, for instance, by the frequency with which I stop, unable to remember the word for a thing despite knowing full well what the thing is--its meaning, shape, appearance, and function.
I've often wished I could think without then phrasing it in words. It would save a great deal of time. But I think casting it into words forces me to put it into a more-logical form, which sometimes reveals holes in my pre-linguistic logic. Tho I can't cite any examples.
Mathematically, I can't think in words, because math is (I think) more powerful than words. I may visualize distributions and graphs in my head, or not visualize them at all, yet intuitively understand what characteristics a phenomenon produced by a particular distribution will have, or what shape the composition of two functions will have, or how it will behave at its extremes, or whether a function's surface will be monotonic, non-monotonic but smooth, or discontinuous.
I'd say I think about 85% in words (mostly the sound of my own voice in my head), 10% spatial reasoning (in particular, I solve equations by rearranging symbols in imagined space and often remember lists and procedures spatially), 5% the odd visual concept, other person's voice in my head, musical idea, etc.
I was going to say that my internal voice comes mostly from the middle of my head, but maybe slightly to the left, before I noticed that I'm sitting with my right ear close to a wall and tried turning my head. Turns out my inner voice actually seems to shift a bit to be coming from where most of the sounds in the room would be coming from. I can also move my inner voice to pretty much any location I want, including outside my skull (farther away = less precise location, though).
I seem to think in words, but I don't think I actually do. When I try to say or write down what I'm thinking I have to stop and figure out how to say it.
I don't see how that counts as evidence? I'm quite sure I think in words, but something about talking out loud or writing seems to reduce my fluency a bit, like... it's just easier to think in my head somehow.
(edit: although I think with words, I think there's also underlying thought going on without words, and that the words just... help, somehow... but I can still have an idea with no corresponding word, which can occupy a silent spot in a mental sentence.)
I don't think thinking with words is a real thing. It's more of a "is the Father or Son or Holy Spirit greater, and are they one, divided, one and divided" sort of question, or a discussion over whether fish is a category or a class or a type or a fuzzy boundary. Nobody thinks in words; some people just larp with words while they think.
I have to think in words as I compose this answer to your statement. But I didn't think in words to determine what sort of answer I would give your statement. The words happened after I decided to give you my personal dichotomy example of non-word / word thinking in action.
I think larp might be the wrong verb. Thinking in words makes sense to me as an interface between thought processes and conscious experience. In order to process thoughts, you have to compress and convert them into a format that consciousness can read, whether that's words or pictures or feelings.
Strictly conjecture, of course, but it makes more sense to me if it's a process with a purpose, rather than something self-gratifying.
What’s this distinction between “thought processes” and “thought formats” and “conscious experience” and “feelings”? Why is it a thing?
I expect the actual mechanisms that produce thought are very intricate- a lot of things happening in parallel, a lot of chaos and uncertainty, and a lot of important parts happening at the individual neuron level. That process would be illegible to me, it's got orders of magnitude more parts than the most complex systems I understand.
Whatever I perceive my thought process to be is an abstraction. I think in a self-restructuring network of electric pulses and neurotransmitters. I don't experience anything like that, I experience words. Before a thought enters my conscious experience, it gets translated into words, likely using a lot of processing power. When my executive functions manipulate thoughts, they do so at the word level. So words appear to be a kind of interface between high-level and low-level processes.
For some people, the translation doesn't go to words, it goes to visuals. Those people probably have the same process for producing thought, but the high-level component is in a different format. Often, my thoughts skip translation into anything informational and just output a feeling, especially if that feeling is fear.
So my 'thought processes' are a complex, subconscious system. My 'thought formats' are the encoding used to convey those thoughts to my higher-level functions. My 'conscious experience' exists somewhere above the encoding level and information is routed through it for whatever reason. And my 'feelings' are a particular component of my total conscious experience.
My apologies if I'm not addressing your question correctly. And, again, this is all very speculative, it's just the model that I'm working with. Really, my point of contention is just that it makes more sense if I experience thoughts as words *towards an end* and the system I've described is just a hypothetical that follows from that assumption.
This sounds close to right to me.
I think in words automatically, but not exclusively. I've noticed that my nonverbal thoughts arise earlier than their verbalizations. The difference in speed is remarkable: I can form a thought in a fraction of a second, but the verbalization can take several seconds to catch up. I can cut it short and leave the thought unverbalized, but it takes effort and focus, like trying to avoid blinking, but even more difficult.
You being aware of where your thoughts are coming from seems highly dubious. The brain, physically, does not have senses. It does not have touch or pain receptors and it absolutely does not have proprioception. There are no neurons that encode the information "this is my own position relative to all the other neurons".
However, the brain does create the *experience* of your consciousness being located in your head. You feel like 'you' are a perceiving entity separate from your body, with the perceiving happening in your head. While it does happen there, the experience is illusory. When people have an out-of-body experience, they tend to perceive themselves as floating above their own body. I believe this is the result of a shift in this illusion of being inside your own head. The origin of your sensations never changes, but the feeling of it can.
So, when you say that you perceive where in your head sensations are arising from, I don't think you're tapping into any neural location data. However, I think it's plausible that you can perceive that different thoughts have different 'signatures' that indicate which brain area they originated from. Just like a face has its own qualia, a 'Wernicke thought' might also. This qualia may or may not be interacting with your thought-location experience, based on your knowledge of what brain areas are likely to be involved.
Is it possible for you to alter your perception of where your thoughts are coming from? Can you attempt to shift it towards something like an out-of-body experience?
As for proprioception, I wouldn't say this skill (or delusion) is related to the body's proprioception systems. It's more of an impression. Like I said, this may be like a phantom limb — except that I don't intuitively believe that that's the explanation because it happened as a side effect of meditation and observing my thoughts.
I will totally admit that my claim that I can locate my brain processes internally in my skull is highly dubious, indeed! But it's something that I picked up as a side effect of meditation — actually it came on slowly over ten years of regular meditation, of observing my thoughts arise and dissipate — and I'm not sure anyone who hasn't spent years meditating could do this. Likewise, I've never discussed it with other meditators, so I don't know if anyone else has noticed this. But I thought I'd put the question out there, to see if anyone else has had this experience. I'm perfectly willing to admit that it may be similar to the phantom limb phenomenon.
And, yes, I've tried to make myself feel that these processes are happening in other locations of my skull, but with no effect. So, if the original "perception" was due to autosuggestion, it's damn hard to autosuggest myself out of that belief! Likewise, I can't shift my perception of "me" and where my thoughts are arising to anywhere outside of my own cranium — for example, I can't move them down into my abdominal cavity. Qualia are somewhat different, though. Feeling is at the point where I'm touching something. Taste is in the mouth. Hearing is inside my skull (but that's where my eardrums and cochlea are). Vision seems to be right behind my eyes, though. I've tried to make myself visualize an out-of-body experience, but I've never been able to accomplish that way of perceiving my selfhood. The thing that I identify as my identity, the "I" that is the watcher in Plato's cave, seems to be located at the midpoint on the line between the front and top of my ears, just below where I imagine that my internal-speech process is running.
Very interesting. Thanks for sharing your experience!
> There are no neurons that encode the information "this is my own position relative to all the other neurons".
I don’t see the relevance. Computers don’t have these either and still have strace and can read and analyze their own code. And while that’s ... less of an analogy and more of an unrelated system, it shows you don’t need whatever that kind of neuron is to do whatever a “think about thinking” is
Computers can analyze their own code, but they don't have a sense of where in their memory chips that code is located. And that's more than an analogy; it's the very same phenomenon, just using electric rather than chemical signals, and different representations.
Er, they have an /address/, but they don't know where, /physically/, the word "at" that address is stored.
They totally do! In the sense that it is possible to write code that can figure out relative physical locations of memory addresses ("Row Hammer" is one example).
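If anyone wants to see how software can get at physical placement at all, here's a minimal sketch (my own illustration, not Rowhammer itself): on Linux a process can read /proc/self/pagemap to map one of its virtual addresses to a physical page frame number. Modern kernels zero the frame number unless you run with root, so take it as a demonstration of the interface rather than a working exploit ingredient.

```python
# Minimal sketch (Linux-only, needs root on kernels >= 4.0 to see real values):
# translate a virtual address of this process into a physical page frame number
# via /proc/self/pagemap. Illustrative only -- not Rowhammer, just the
# "software can see physical placement" ingredient it builds on.
import ctypes, os, struct

PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")

def virt_to_pfn(virt_addr):
    with open("/proc/self/pagemap", "rb") as f:
        f.seek((virt_addr // PAGE_SIZE) * 8)       # one 8-byte entry per page
        entry = struct.unpack("<Q", f.read(8))[0]
        if not (entry >> 63) & 1:                   # bit 63: page present in RAM
            return None
        return entry & ((1 << 55) - 1)              # bits 0-54: page frame number

buf = ctypes.create_string_buffer(b"hello", PAGE_SIZE)    # something to locate
addr = ctypes.addressof(buf)
print(hex(addr), "->", virt_to_pfn(addr))                  # 0 or None without root
```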
You're right, but I didn't say that neurons by definition cannot do this. They just don't. We can think about thinking because it's advantageous to do so; there is literally no reason for our brain to expend any energy on thinking about where neurons are located relative to each other.
For the most part, I don't know how to think without words. I try sometimes, but broadly speaking my internal monologue just keeps chugging away as usual.
(I honestly have no idea what you mean by the second question. How do you have any kind of spacial awareness for the inside of your brain?)
It may be pure delusional thinking on my part, but after sitting in meditation and getting used to watching my thoughts arise (words, images, other surrounding qualia), it seemed to me after a while that certain processes were located in certain places inside my skull. I guess my counter-question would be: have you ever tried to develop a spatial awareness of your mind and its functions?
It seems to me that most people just let their consciousness do what it does without trying to examine it as it happens. The Buddhists are interested in observing the process as it happens, but there's nothing in Buddhism that says you can't observe *where* the process is happening...
Just chiming in to make N=2: I also either have a spatial sense of my mental processes within my brain or else a delusion that I do.
It makes a little sense that we would have sensory input relating to blood flow in the brain, like an fMRI, just because we can sense things like hypertension anyway. But I'm less confident, because I can't see an adaptive advantage. I'm also not blinded, as I knew generally where things were supposed to be before I made those observations. A quick experiment:
where do you feel your visual imagination exists? (I may have been linked to the correct answer from here, so here's hoping you haven't seen the same thing)
One thing that's definitely not placebo is the 'third eye' sensation of pressure behind the center of the forehead. I experience it very intensely while meditating. I understand there is a good chance that it's gland behavior, but it at least adds credence to the idea of a sense input that doesn't apparently produce any advantage.
> where do you feel your visual imagination exists? (I may have been linked to the correct answer from here, so here's hoping you haven't seen the same thing)...
I'll answer your question with a story. For a while there I was studying under a Nyingma instructor. She was big on giving us guided meditations on visualizing images. For instance, we were supposed to imagine Avalokiteshvara sitting on a lotus with four arms: one arm holding the lotus flower, one arm holding up a mala, and the two other arms with their palms in a prayer pose. She'd give us his/its/her clothing to visualize, and its crown, and the colors, and so on. These guided meditations would take half an hour at least. Then, after we had constructed our bodhisattva image, we were supposed to try to hold those images "in front of us," she said.
Anyway, I just couldn't do it. I kept trying to imagine Avalokiteshvara in front of me, but I couldn't keep the image "together". I'd lose it quickly. I'd try to reconstruct it to catch up with her narration, but that just made everything worse. I asked her for advice, but she just told me to keep practicing. It was one of the most frustrating meditative experiences I've ever had. I left her group and moved on to a Gelug group that just practiced mindfulness meditation.
Some years later I decided to try these visualization practices on my own. I had seen enough images of Avalokiteshvara to know what he/it/she looked like. So I tried to visualize him/it/her in front of me like my Nyingma instructor had instructed. No luck. But then, for some reason, I started to try to imagine Avalokiteshvara behind me, so I was sitting at its feet. Very quickly I was able to construct the image! Then I realized that I was really imagining it as sitting in the back of my brain. Well, guess where the visual cortex is? You know the answer.
Oh my. I wasn't going to mention the third eye (!) -- just because I thought people may have had a hard enough time dealing with the spatiality of thoughts and qualia in the mind. But I definitely experience the third eye. For me it's a dim internal "light source" right where you described it. It's always there, even when I'm not meditating. But once I close my physical eyes and take notice of it, it's like an area of low phosphene activity (similar to what I get with my eyes closed). If I go off on a meditative tangent and observe the third eye's phosphene patterns, they become more pronounced. Lots of metallic blues and indigos. And the longer I concentrate on it, the faster the patterns appear and approach me. Almost as if they're coming out of a tunnel at high speed. I have a friend who just started meditating, and although she says she never noticed it before she started meditating, she's developed an even more pronounced perception of the third eye than I have (colorful shapes, she says). And she's found it hard to meditate with the third eye glaring down on her. As for me, I don't get sucked into it unless I make the effort.
I haven't. Maybe I'll try that. Though I don't currently see how such a thing is possible, biologically/neurologically. Like, my body has sensors and my brain has corresponding regions for tracking where different parts of my body are, so I get spatial awareness of my body that way. But is such a thing true for my thoughts and my brain itself? How/why? (I'm not trying to accuse you of being delusional, just that it doesn't track with my current understanding of how my brain and body work.)
Start by trying to imagine where your sense of self identity (your "I") is in your brain. Is it up front over the eyes or behind the eyes? Is it at the base of your skull where the spine connects? Is it way in back? Is it top center? Don't over-think it. If it doesn't come immediately to you, revisit the question over the next few days and the coming weeks. Try to remember to try this exercise when you're talking to someone or when you're out physically exercising. See what you find.
To elaborate (in what would be an edit if Substack had an edit button): Words are such a fundamental part of my subjective conscious experience that, when people tell me that they think wordlessly all or most of the time, my (incorrect) instinctive gut response is that they are mistaken, because I can't imagine what that would *be like*. I can imagine thinking wordlessly some of the time — I'm pretty sure I've done it (though it's impossible to notice while I'm doing it, because that would be wordful) — but I can't imagine how a human could think like that all or most of the time.
(To be clear, my head isn't bereft of non-word cognition — it's full of wordless feelings and emotions and sometimes pictures too. It's just that there are very rarely *no* words in the mix.)
What Is It Like To Be A Human Who Thinks Without Words?
I agree with you on the difficulty of noticing that you are thinking wordlessly. I find that my cognition elaborates itself into words when I focus it on itself, and this process is usually too fast and unconscious to notice.
The reason I'm sure that this does happen to me is that I can get myself into a state where my internal monologue is in a language I'm less fluent in, which causes the convert-thoughts-into-words-for-metacognition process to be more difficult and slow down enough to be observable.
I like your multilingual example! Very cool!
Like Helen Keller before she learned touch sign?
I always think with words. Sometimes pictures too. The language of the words varies with topics and context.
There will surely be an Afghanistan thread on this politics-allowed open thread, so I'll start one.
My only-slightly-controversial view: we should try to work with the Taliban. If they view themselves as good men and followers of Allah, presumably they won't engage in Khmer Rouge-style atrocities. If they don't do that, Iran and Pakistan will start fueling a resistance that isn't tainted with American imperialism.
What do you think y'all should work with them on?
Setting up a public health system and infrastructure for domestic and international economic activity and interaction?
The Taliban are bad Islamist theocrats, unlike the good Islamist theocrats in Saudi Arabia, so this would obviously be unacceptable.
The bad outcome here isn't Khmer Rouge-style atrocities, it's Taliban 1996–2001-style atrocities. Which Pakistan was fine with.
The Khmer Rouge viewed themselves as good men and followers of reason. Look at what they do, not what they say or believe, and certainly not what they say that they believe. We've got plenty of evidence as to what the Taliban do when they think they can get away with it.
Also, as WayUpstate points out, Pakistan *already* fueled a resistance untainted by American imperialism. Which is the Taliban.
Pakistan is an agglomeration of Balochistan, Sindh, Punjabistan, and "frontier areas" populated by Pashtuns. If the situation in Afghanistan threatens populated cities, the government of Pakistan will not give idle acquiescence.
I think you overlooked the fact that the Taliban are a Pakistani-created force and continue to be supported by the Pakistan intelligence organization. Should we work with the Taliban? Only as far as they demonstrate their ability to follow through on any agreement and receive any subsequent punishment for not doing so. I would definitely start from a position of "we doubt the veracity of any of your statements but will see your actions as proof of your ability to build confidence in your ability to govern or make agreements that you have the ability to execute."
I'm not overlooking the Taliban's Pakistani connections at all.
The situation is "trust but verify". Tit-for-tat. And the rational answer is to start from a position of "we trust you, and we know you will be destroyed if our trust is in error".
I’m not sure that is the rational “starting point”, in that we are not at the start of things; the Taliban has a long history to consider. It’s not like they are a “blank slate” regime with no prior history to evaluate.
I think leaving a path for the moderate margin of the group to have some wins and therefore gain popularity is a fairly generalizable strategy for this kind of situation (though the US political system seems to have trouble supporting wins for any part of an "enemy", cf. the moderates in Iran), and so while I would not say that we should trust the Taliban, I think we should try to work with them so that we have a chance to steer their development.
I think the main constraint to consider is to what extent domestic US politics constrains the government's foreign policy choices. It’s not like the government is free to pick any rational policy action they want, free from political concerns. So I think the main things to watch will be whether the Al Qaida-offshoots move back under the Taliban’s wing, and how strongly they regress on human rights. It will be hard to get political support for cooperation from the US if they are actively harboring terrorist groups or executing “infidels” left and right.
Alex, the other evening an Afghan-Australian biophysicist, Dr Nouria Salehi, was discussing what her contacts in Afghanistan are saying. Her (their) take is that finally there might not be corruption on every corner, and a possibility for Afghan-led reform now that the Taliban are in power. The US-centred corruption of the last 20 years did serious damage to the country, so it's hard to see how it could be worse under a nationalist government (I may have to eat those words, but let's see). Apologies to US readers, but there are scant examples of US occupation benefiting an underdeveloped nation without a strong existing national government.
As akarlin said, the loss of 40%GDP worth of subsidies will hurt, and they have to pay civil servants and workers and such to run all the tasks of government, from city to provincial to central. They won’t necessarily succeed!
The "state" of Afghanistan is a post-colonial construct, and it is true that revenue to fund the centralised state may be beyond the Taliban. Although they will have a number of willing donors - money for influence - they may well decide to try to recreate the heavily local nature of "old" Afghanistan. The advantage of that would be that local revenue would support local administration. But the middle class of Kabul wouldn't like that.
I agree that it's hard to imagine any government being more corrupt than the Ghani regime.
I'm not sure that is of any benefit to either the Taliban or the abstract concept of good governance, though.
I agree that corruption will probably drop, but it seems like there is a decent risk of a lot of state-sponsored murder. But if that doesn't happen, or if it happens in a burst and then largely subsides, I could see Afghanistan's outlook improving from there.
I sort of think that US occupation trapped Afghanistan in a low equilibrium, and now that it's over they have the chance to aim for a higher equilibrium, but they could also just go into a free fall.
It's far from certain that the Taliban will be able to hang on to power. The Afghan budget was 75% dependent on foreign grants, and that has just gone up in smoke. Foreign currency reserves are frozen and the country is de facto cut off from the world economy. This is a situation that would test the most ingenious economists and policymakers; we are talking about the Taliban. Last time the Taliban ran a central bank, its governor declared the currency worthless, stopped the issue of new notes, and spent more time on the battlefield than in his office. I wrote about this here: https://www.unz.com/akarlin/where-are-the-afghanis/
Meanwhile, the Northern Alliance has also reconstituted itself. Hard as it is to believe, but not all might be lost yet. In any case, recognizing and dealing with the Taliban is way premature right now.
We'll see if they keep it frozen forever. I suspect it's part of the Biden Administration's negotiations with the Taliban to keep them from attacking the airport until the US can get its Afghan allies out of Kabul.
Did the Taliban have great reserves of foreign currency from 1996-2001?
I am salivating at the thought of seeing Taliban governance and administration in action. It’s gonna be so fun to watch.
"a situation that would test the most ingenious economists and policymakers" - yet what you describe is what most countries before 1900 considered normal.
American policy-makers have suffered for decades from the delusion that money alone can solve all their problems. They have wasted literally over $1 trillion USD in service of that lie. It didn't work.
So long as Afghanistan is self-sufficient for food, and it has a government that discourages foreign trade, it literally has no need for "foreign currency reserves".
5M Kabulis might beg to differ in a few months after the state ceases paying wages and hyperinflation eats their savings. Of course, if the Taliban survives for a few years, the issue will become moot, as widespread de-urbanization occurs. The problems won't be trivial, though. The Afghan population has tripled to quadrupled since the 1980s, there will be a huge refugee crisis.
Good use case for Bitcoin as an alternative store of value - less risk that people in places like this have their savings die or access to their funds denied. Highly subjective opinion of course, but that's my belief.
https://www.cnbc.com/2021/08/21/bitcoin-afghanistan-cryptocurrency-taliban-capital-flight.html
"Chainalysis' 2021 Global Crypto Adoption Index gives Afghanistan a rank of 20 out of the 154 countries it evaluated in terms of overall crypto adoption."
Notable tweet mentioned in the article: https://twitter.com/janeygak/status/1412985931167064064?s=21
Is there sufficiently widespread internet/electricity in Afghanistan for this to be viable? And is it robust enough to survive the likely turmoil that awaits the country?
I'm much more skeptical about Bitcoin than I used to be, but you can use cryptocurrencies just fine in a place without a lot of electricity for mining, as long as there are people elsewhere in the world working to keep the chain secure.
> Access to electricity (% of population) in Afghanistan was reported at 97.7 % in 2019
> There were 7.65 million internet users in Afghanistan in January 2020. The number of internet users in Afghanistan increased by 366 thousand (+5.0%) between 2019 and 2020. Internet penetration in Afghanistan stood at 20% in January 2020.
50% mobile phone penetration it seems
And I dunno about whether it'll survive the Taliban. Probably lots of reports about that available. The loss of foreign aid will hurt.
> “As is always the case, the IMF is guided by the views of the international community,” IMF spokesperson Gerry Rice said in a statement Wednesday. “There is currently a lack of clarity within the international community regarding recognition of a government in Afghanistan, as a consequence of which the country cannot access the Special Drawing Rights (SDRs) or other IMF resources.”
Hilariously phrased.
> 75 percent of public spending funded by grants.
The Afghan central bank's ten billion USD of assets have also been frozen.
> "profits from the mining sector earned the Taliban approximately $464 million" in 2020.
So their economy is ... probably screwed. You can see why bitcoin probably wouldn't help, and if it could, the US government would probably just seize it and arrest anyone who traded for it.
Or, more accurately, an Afghan micropayments app that can support rapid and frequent transfers of sums of $0.10 USD and below. Honestly I do expect at least one attempt.
Yuno L2?
Haha NO. If you're of the persuasion of https://twitter.com/ptonerdoteth/status/1427217060078227462 I have nothing to say to you; the community shall indict you on its own.
If you're of the "Bitcoin would have let Ashraf Ghani loot the country of money more efficiently" persuasion, I also have nothing to say to you.
I disagree with your economic theories, but that's not the main issue. Why would the Taliban cause de-urbanization? They aren't powerful enough a government to cause that; at least not without Iran, Pakistan, Tajikistan, etc. supporting rebel movements.
The Taliban do not have the fiscal capacity to support a state at the scale it was at when foreigners were pumping in subsidies equivalent to 40% of Afghan GDP. When state workers in the cities stop getting wages, they'll need to move out into the country (or abroad) to survive. Service industries that catered to them will also have to radically downsize.
There is a precedent for this. Kabul had 1.5M people in the late 1980s, by 1996 it had collapsed to 0.5M. Certainly I don't expect it to be as bad this time, but I don't see how it can not happen in the medium-term.
Opium and minerals with high extraction costs
If you don't consider "38 million souls" a thing worth having, that's on you.
I kind of do, but that logic justifies long-term colonial occupation of a rather large fraction of the world - it's special pleading to deploy it only about Afghanistan.
Lots of minerals. Other than that, I don't know.
What are some good examples of organizations learning from failure, and doing things better the next time around?
Two examples:
- I've recently been playing Skyward Sword. It's, well, disappointing, but I'm impressed by the degree to which Nintendo managed to notice the frustrating things about it (e.g. the lack of open-world feeling) and successfully turn them around by Breath of the Wild (while also successfully keeping/improving a lot of the things that did work in Skyward Sword).
- the US navy in WW2 seems to have consistently done this pretty well - e.g. they introduced damage control measures in their carriers after the battle of the Coral Sea that may have helped them in Midway (only one month later!)
What are some others?
You could say that the founding fathers of the US did, with the initial attempt of the Articles of Confederation, followed by the constitution
Which of course naturally raises the question of whether, now that we're no longer in the initial post-colonial phase, we should have considered a third or fourth iteration of the governance system, maybe starting in the early 1820s when the party system was falling into crisis and re-forming.
Yes--which raises the question: Now that we're not at war with Britain any more (which IIRC was the main reason for that change--we couldn't build a military, nor pay the soldiers what we already owed them, under the Articles), would we be better off going back to the Articles of Confederation?
No. One of the other issues is that under the Articles of Confederation, there was no way for a central negotiator to bind all the constituent States to treaties (such as tariff waivers).
I mean we couldn’t pay soldiers because we couldn’t collect taxes, and we also had issues with enforcing laws. I suppose if you felt just disbanding the US into 50 different nations was good it would make sense to go back, but otherwise I would say no
The US Army has probably put more effort into learning from failure than any other organization. It's a regular part of everything they do. I don't know whether they've aggregated any of this information anywhere that's publicly available. I can say that, anecdotally, learning from failure has great benefits, but also tends to lead to being prepared for the previous war.
If the army has made learning from failure an ingrained automatic habit, then being prepared for the last war doesn't matter if the next war lasts longer than a few days.
that depends on just how frequently one is at war; maybe the USA intervenes globally often enough to get away with this (though I think it does conspicuously have this failure mode*), but for states that have managed decades of peace, you run into issues that take years to resolve, and you can't afford years against a peer military. Hell, you can't necessarily afford days in a modern conflict.
*things like guns that don't work well in desert conditions, or frankly the big issue of how to fight a non-state actor, which after decades in the ME they never figured out.
While I'm sure they learned a lot, I think the main reason they did better at the end was simply economic inertia: the Confederacy could not replace casualties and re-supply their troops at the same rate the Union could, due to underlying economic factors. Sure, they learned to put aggressive generals like Sherman and Grant in charge, but Grant's march towards Richmond had the Union losing ~55,000 men to the Confederacy's ~33,000. The Union could afford to lose 55,000; the Confederacy could not.
Right, but see Grant *knew* that and used it effectively to win the war, which McClellan didn't. As they say, successful generals study logistics. Economic and manpower advantages are only enabling advantages, they only turn into military success if they are used well. It's perfectly possible to lose a war despite having a huge advantage in manpower, cf. the Treaty of Brest-Litovsk.
If you live in a city where hospital beds and ICUs are full, what exactly happens to you if you need hospital care? Covid or non-covid, both scenarios. I'm trying to understand this for Austin, Texas. Quite scary to think about.
Moving less urgent people from the ICU to regular beds was mentioned. But in general, when the system gets more stressed, you wait longer. Sometimes long enough to die before receiving the required medical care.
In Poland ambulances were waiting in queues in front of hospitals, sometimes for several hours.
AFAIK no one has tried so far to check/estimate how many people died as a result.
ICU is a designation, not a physical bed design. To some extent beds can be made ICU or not ICU. As the name implies the difference is mostly how intensive the care you get is, which is a staffing issue. The same is true for beds. The number of "beds" available does not refer to actual, physical beds but rather staffed beds capable of providing some level of care.
The fullness of ICU can be quite elastic as a result. If you look at graphs from various hospital systems you can see that often the number of beds and ICU positions goes up by a lot within the span of a couple of weeks. Moreover hospitals try and keep ICU as full as possible because otherwise you're wasting valuable staff who are just hanging around waiting for patients to arrive, instead of spreading themselves out over less intensive beds.
However the media almost never explains this, because stories about ICU being "full" are guaranteed drivers of clicks. You can find stories about overflowing hospitals in flu season for many years pre-dating 2020.
ICU "designation" also involves certain things like staffing ratios, typically 2 beds per nurse.
Getting more ICU beds built is a lot easier than getting more ICU beds staffed.
Quite possibly they move someone who is in an ICU bed that they don't need, into an ordinary hospital bed. "Need" is a rather fuzzy term here, and so long as ICU beds exist it is in everyone's(*) interest for them to be mostly-full with people who sort-of-need them. At some point, you reach a level where literally every bed is occupied by someone who will literally die if they are not in a literal ICU bed. But until you get close to that point, it's hard to know how much slack is left in the system because nobody is really motivated to track degree of "need" for ICU patients.
Once you do reach that point, the pile of dead bodies will tell you pretty clearly that you should have done something about it last week.
* Except maybe the people who are paying the bills, but in the US health care system they don't get to make any of the relevant decisions.
I witnessed Delta do this to India. It was horrifying. You'd think we would be better prepared here by now. :(
Why? Virtually nobody is willing to pay for unneeded capacity, and rationing by willingness to pay is anathema.
Was it really such a hard problem to anticipate this explosion of serious cases based on what Delta did in other countries? Maybe it was.
I think India was the only country known to have a delta outbreak before the major delta outbreaks in Europe and North America got going. So there wasn't quite enough evidence that delta was qualitatively different, until it was already happening. Of course, the US delta outbreak was clearly different from prior outbreaks by mid July, so there could have been time for some reaction.
My understanding is that delta has led to relatively fewer serious cases than previous variants, at least in the US.
I don't think this has been clear. There have been fewer hospitalizations and deaths recently than there have been at this point in previous waves, but most of this has been attributed to the high vaccination rate among older people, even in areas with overall low vaccination rates. There has been no consensus on whether delta itself hits individuals harder or less hard than previous variants.
Are hospital beds completely full anywhere? If ICU beds are full they just have you in a hospital bed. And I imagine if all "beds" are full there is some amount of extra beds they can give you. I don't think the count of "beds" is actually the physical number of beds but rather normal operating capacity. But yes at some point the system gets overloaded and you have to triage. Also it seems like in a lot of places the real limitation is staffing, not hospital capacity.
We've got field hospitals set up in the parking garages of the major hospitals in Jackson, MS right now. The last time that happened was Hurricane Katrina. The hospitals are, in effect, full. Beds are available in the maternity ward, but there isn't staff enough to handle any more patients in the hospitals.
The limitation is certainly staffing. Regardless, the "hospital bed" count is reaching capacity in various deep-south states now-ish, and it will certainly reach it soon if nobody is willing to combat the spread of COVID.
We need to set up Field Hospitals, and let people know that if you didn't get the vaccine, you won't get ICU treatment if you are sick with COVID.
On the one hand, I assume that statement will be extremely unpopular. On the other hand, I'm not sure who is going to object to it (and why).
Beyond any ethical issues which are arguable there are no legal structures to support this kind of decision making on the side of physicians. Any physician making the decision to deny life-saving care on this basis would be taking his life savings and likely his license and shredding it.
Can't doctors make decisions like this when triage becomes unavoidable?
They can make decisions based on standard triage procedures, like determining which patients will benefit most from care. I don't think triaging on the basis of which patients were vaccinated would be any more legal than triaging on the basis of which patients were drinking, which patients were having unprotected sex, or which patients are citizens.
Sure it would. Vaccinated people are far more likely to have better outcomes with supportive care (which is all a hospital can do in the hypothetical overwhelmed state) than the unvaccinated, so if it comes down to triage, yeah the unvaccinated are going to be back of the line, since they're mostly likely not to make it no matter what you do.
What are you talking about? Age makes a far larger difference than vaccination status, so young and middle age unvaccinated people are going to be ahead of old vaccinated people. Two people of the same age and with the same level of respiratory distress are going to have basically the same prognosis, regardless of which one is vaccinated. It's only before people are hospitalized that vaccination status is a good predictor of outcome.
We should tell anyone who can't pay that they won't get treatment. I can't imagine who is going to object to it (and why).
I think there are pretty good reasons not to do triage on the basis of moral condemnation of peoples' past behavior or choices. I'm guessing that nearly all of the people who refused the vaccines are people who either were genuinely misled by the people they listened to, or people who were phobic about needles or doctors or vaccines or something. Being misled on a complicated technical and risk-balancing question isn't a moral failing.
That sounds good in the abstract, but if you have a heart attack patient who clearly won't benefit from treatment (either because they're already fine, or because they're too far gone) and a covid patient who will clearly benefit from treatment, I don't think you'll find any doctor who wants to waste the effort following your guidance.
They're saying the state is full, neighboring states are full. Do they fly ppl out? Who maintains a master list of available beds?
https://protect-public.hhs.gov/pages/hospital-utilization
I can't help but notice that only the second of those articles notes the absolute numbers of beds in the hospitals in question, and the two listed are 15 and 25 (which doesn't have an ICU, it is so small) bed hospitals, total.
I think that if they led with "Hospital with 15 beds fills up, needs to send sick people elsewhere" the effect of the shortage would seem a little weaker. I know they are very rural, low population density regions, but still, 15 hospital beds can get filled up by a good sized highway accident, or a bus roll over.
In terms of how hospitals manage beds, in many states it is regulated either by hospital boards or the state medical boards directly. Hospitals need what are sometimes called "certificates of need" to add more beds and the like, officially to avoid excess capacity and cut throat competition. It's a straight up cartel with state backing.
There's an article in the Aug. 23 New Yorker by Joshua Rothman about rationality. Though the article didn't mean to make this point, I thought it demonstrated the limits of a supposedly objective rationality in a lack of consideration of the difference between people's individual goals and personalities. Fuller discussion at http://kalimac.blogspot.com/2021/08/rationality.html
The article is an okay overview for the uninitiated, and is an amusing contrast with the one they published last year when the NYT "doxxing" drama was gaining steam. Whereas that time around they mainly focused on controversies, here they don't even mention the likes of Scott, Yudkowsky and Hanson, nor the weird ideas like EA, AI risk or NRx, and atheism is only barely hinted at, with disapproval.
I'd say that the general thrust is roughly in the right direction, with the most salient point coming near the end: "I don’t think she would have been helped much by a book about rationality. In a sense, such books are written for the already rational." Of course the logical conclusion that therefore such education should begin at childhood would be far too radical for such a milquetoast piece, but at least they have enough sense not to warn against it.
Gonna shill for this substack I found: Desystemize (https://desystemize.substack.com).
It showcases how getting numbers that tell us actual facts about the universe is much harder than is commonly imagined, and how our Big Data methods are deluding us. Gives some hope regarding AI, as it suggests we're nowhere near as close to superintelligence as some fear.
Also gets extra kudos for getting me to engage with David Chapman's work (Meaningness, In the Cells of the Eggplant, Vividness), whom I was aware of, but was sleeping on. Pretty interesting that Chapman practices Vajrayana Buddhism. His vision of the world as both charnel ground and pure land is quite haunting (https://vividness.live/charnel-ground and https://vividness.live/pure-land).
I saw the most recent Desystemize post (on DCSS) and enjoyed it. Overall, I would endorse that blog.
I've finally been able to put my fuzzy and vague skepticism about blockchain and NFTs into coherent words, and I call it the "Degraded Blockchain Problem".
Curious what others think. Are there any blockchain based apps that actually get around this issue?
https://www.fortressofdoors.com/the-degraded-blockchain-problem/
I think for gaming the blockchain is mostly just letting you get around regulatory issues.
Imagine that these games were fully centralized and there was a marketplace where players could buy and sell items worth thousands of dollars. Then the game company becomes a broker of transfers of valuable objects and probably needs to do a bunch of stuff to ensure legal compliance (KYC / AML). If all transfers happen on the blockchain and the game company never has custody of any assets, then they likely have less legal liability.
Blockchains become really interesting when there's a reasonable possibility of competing UIs on top of the on-chain data. For instance with prediction markets, maybe with social networks, etc.
If no one entity has a monopoly on "the thing you care about", then it can work. For instance imagine an ecosystem of games that share characters and items. If one game developer severs the link between the NFTs and the in-game items, then they're cutting themselves off from this ecosystem. They're also likely violating the expectations of users in a more visible / traceable way than when normal centralized companies make changes to their games, possibly leading to more user revolt.
I don't know if anyone's actually written a blockchain game worth playing, but I think the key would be to make the information on the blockchain be something that users have to agree on to use the app together. E.g., for your "blockchain pokemon" example, the app could require you to prove that you own a particular monster on the blockchain before you can use it in a battle with another user.
Sure, you could write a new app that removes that backend requirement and allows anyone to battle with whatever Blockemon they want (like Pokemon Showdown does for the actual Pokemon franchise), but presumably the reason you're playing this game at all is because you both want to play with the game's artificial scarcity where you can be the only person on Earth who has a shiny Pikachu. If someone doesn't want to do that and wants to battle you with a team of six hacked Mewtwos, you can just... not play with them.
("You could just mod the game to do something else" is an argument that applies to every game, really, not just blockchain ones.)
It's still an extremely narrow use case - it's basically saying "the only reason to put your game on the blockchain is because your game is about playing with artificially scarce assets", which is a little circular. But I think that some MMO or CCG mechanics could fit into that box. And perhaps having it on a chain instead of on a central server could help if your game has RMT and you want to promise players that scarce assets will remain scarce? Still thinking this over.
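To make the ownership-check idea concrete, here's a minimal sketch in Python with web3.py, assuming a hypothetical ERC-721 "Blockemon" contract; the contract, addresses, and matchmaking hook are made up for illustration, not any real project:

```python
# Minimal sketch: verify on-chain ownership of a hypothetical ERC-721 "Blockemon"
# before letting a player use it in a battle. RPC URL and addresses are placeholders.
from web3 import Web3

# Only the standard ERC-721 ownerOf() fragment is needed for this check.
ERC721_OWNER_OF_ABI = [{
    "name": "ownerOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "", "type": "address"}],
}]

def owns_monster(rpc_url: str, contract_addr: str, player_addr: str, token_id: int) -> bool:
    """True if player_addr currently owns the given token (addresses assumed checksummed)."""
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    contract = w3.eth.contract(address=contract_addr, abi=ERC721_OWNER_OF_ABI)
    owner = contract.functions.ownerOf(token_id).call()
    return owner.lower() == player_addr.lower()

# A game client or matchmaking server could then gate battles on this check, e.g.:
# if not owns_monster(RPC_URL, BLOCKEMON_CONTRACT, opponent_wallet, monster_id):
#     reject_battle()
```

In practice you'd also want the player to sign a challenge to prove they control that wallet, but the point stands: the ownership data lives on-chain, so anyone can verify it without trusting any one game server.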
The broader issue with blockchains is that trust is actually a good thing, so designing a system that doesn't require it isn't actually beneficial in any way. Some degree of social trust is a requirement for civilization, and the higher trust your society is, in general the better. Poor developing countries are very low-trust, the Nordics are the opposite.
The conspiratorial blockchain mindset is a perfect fit for our current global movement of populism. Institutions are bad, tear them down! Banks are, uh, bad somehow! Every element of civilization is corrupt! Ultimately this worldview ends in nihilism and never actually achieves anything (as Martin Gurri notes). It's unpopular to say institutions are actually a good thing, financial intermediaries like banks are actually a cornerstone of a functioning economy, political parties are probably good, etc. So no, removing trust and decentralizing everything doesn't really add any value, which is why blockchains have solved literally zero real world problems other than slightly improving black market payment infrastructure
Wait, why are political parties good?
I mean you could just read the Wiki page on political parties for a primer on them
https://en.wikipedia.org/wiki/Political_party#Why_political_parties_exist
The practical reason is that, in repeated real-world tests, most voters cannot identify anything about individual candidates without a party label. It's why nonpartisan/top two primaries, which have been tried for decades on the West Coast, failed to change US politics or elect more moderate candidates. Low information voters (aka most of them) lack the tools to analyze a candidate's stances, without that party label/branding. You get more celebrity/demagogue candidates without parties, and the fact that the US has the weakest parties in the developed world by a huge margin is responsible for, uh, recent celebrity demagogue politicians....
> in repeated real-world tests, most voters cannot identify anything about individual candidates without a party label
Huh? How have third parties moderately succeeded then? https://en.m.wikipedia.org/wiki/List_of_third_party_performances_in_United_States_presidential_elections . That seems like a weird bit of pop polling. Individual candidates can brand themselves as themselves, and this also seems like an argument against small parties or small parties rapidly growing to big ones, which happens a lot in other countries. And we get plenty of celeb demagogues with parties today. “Weakest parties in the developed world”? Plenty of other developed countries have some of their parties go from 50% to 20% or 10% representation, which seems weaker than Rs and D.
> low information voters lack tools to analyze candidate stances
What does this mean? I genuinely am unsure
I'm not sure what you mean by "third parties have moderately succeeded" - Jesse Ventura got elected governor of Minnesota, and there have been a couple of high-profile Senators from Maine and Vermont and Alaska, but other than Ventura, even these "third party" candidates really had strong two-party branding (Bernie Sanders even ran for various Democratic party positions!)
"Weak party" is a technical term that means that elected representatives who are members of the party are allowed to vote their conscience instead of voting the party line. In the UK they occasionally allow this, but in the United States, you regularly have 10-20% of a party breaking with the party line on controversial votes. The lock-step voting of the Republican party in the Mitch McConnell era is a historic abnormality suggesting that the parties are greatly strengthening.
> low information voters lack tools to analyze candidate stances
Most voters get information about candidates from the candidate's TV ads, from the candidate's name, and from the party affiliation of the candidate. When it comes time to predict which way an elected representative will vote on an issue that comes up in the legislature, knowing their party affiliation is a very strong piece of information, but even in the weak party system that the US has, this is much stronger than all the other information that we, the public, have about elected officials.
Moderately high information people can judge that, say, Kyrsten Sinema and Joe Manchin are less likely to vote for the Democrats' infrastructure bill than Bernie Sanders or Chuck Schumer. But even very high information voters can't regularly predict which particular issues in a bill will be ones that particular candidates will object to.
Most voters delude themselves into thinking that they can judge better by using all the information they have than by just using the party name. But in practice, you are more accurate predicting a straight ticket line than predicting on the basis of campaign ads and felt personality.
I certainly agree with the underlying sentiment -- being both Norwegian and Texan it's really interesting to compare the ways trust works in both societies (Norway's trust is noticeably higher IMHO).
The one quibble I have is that there are certain things I would never want to trust anyone with and it's nice to have provably secure systems. My sensitive accounts, for instance. It's nice to know my passwords for those are not stored in plaintext and an employee couldn't compromise them even if they wanted to. But... I guess I am just taking that on trust because it's not like I can audit the servers. Which I guess is your point!
This is a very good point and I agree with you, but I want to comment to say that a system that requires a certain amount of trust does not *increase* trust in the system, necessarily at least. In other words, having a fiat currency requires society to have a certain amount of social trust, but creating a fiat currency will not automatically make people trust each other more.
Agreed, money is magical and only appears to be rational because it is logical.
Since you've actually read the bitcoin whitepaper you know more about this subject than I do. Still, I have to ask: I assume you've looked at Chainlink in depth? My impression was that this is the problem they're aiming to solve with their DONs
Eric Wall puts it better than I could:
https://ercwl.medium.com/whats-wrong-with-the-chainlink-2-0-whitepaper-for-simpletons-d50f27049464
Thanks! Also, damn...
In that case, my hopes for the future of blockchain are:
1) where a centralized authority is trying to establish information symmetry amongst untrusted actors e.g. Helium.com
2) purely digital use cases that don't require participating in the physical realm. It seems like information symmetry has a lot of potential. I don't know what impact this will have on society. It will be interesting to see the results from Bluesky
This sounded to me like the problem of having a system contain itself, "Nobody has yet found out how to cram an entire app entirely inside the confines of a blockchain without having to connect it to an external service that gives it meaning."
This is generally the case, yes. Actual decentralization is a very hard problem, so almost all projects more complicated than basic transactions/trading find ways to 'cheat' and make some parts less decentralized than they should be, but in ways that still convince users and investors that the cheating doesn't really count (and almost no one really cares either way most of the time, because they're all making money and having fun). NFTs are a good example of this in action, and not just because of pointing to centralized URIs, although it is possible to use less-centralized URIs like IPFS.
There was a piece in The New Yorker this week about methane capture from the atmosphere (specifics weren't that interesting or important).
But, METHANE SHOULD BE TALKED ABOUT MORE!! Reducing methane emissions is the BEST lever we can pull to buy ourselves more time in the fight against climate change.
Methane's big unique ability is that, unlike CO2, it clears itself from the atmosphere within a few decades once we stop new emissions! This means we don't have to develop methane capture technology!
My impression is that in many liberal circles, methane is overemphasized.
Although it has a very strong warming effect per unit mass, this is mitigated in two ways. First, methane is worth money so, in addition to regulations against leaking, there is a financial incentive not to leak it. More importantly, though, there are atmospheric processes that destroy methane; it has a half-life of about 12 years[1]. In contrast, ~50% of CO2 remains in the atmosphere after 50 years and ~25% remains after 1000 years[2].
This can be seen by comparing the AGGI bar for methane (CH4) to CO2[3]: CO2 keeps growing, but CH4 has not increased much in recent years (though N2O has).
[1]: https://cdiac.ess-dive.lbl.gov/pns/current_ghg.html
[2]: https://skepticalscience.com/why-global-warming-can-accelerate.html
[3]: https://gml.noaa.gov/aggi/aggi.html
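To put rough numbers on that lifetime difference, here's a quick back-of-the-envelope calculation (my arithmetic, assuming simple exponential decay with the ~12-year half-life from [1]; the CO2 figures are the ones cited from [2]):

```python
# Rough back-of-the-envelope comparison: fraction of a methane pulse remaining,
# assuming simple exponential decay with a ~12-year half-life.
HALF_LIFE_CH4 = 12.0  # years

def ch4_fraction_remaining(years: float) -> float:
    """Fraction of an emitted methane pulse still in the atmosphere after `years`."""
    return 0.5 ** (years / HALF_LIFE_CH4)

for years in (20, 50, 100):
    print(f"CH4 after {years:>3} years: {ch4_fraction_remaining(years):.1%}")
# -> roughly 31% after 20 years, 6% after 50, 0.3% after 100,
#    versus ~50% of a CO2 pulse still around after 50 years and ~25% after 1000.
```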
Having said all that, I'm open to the possibility that reducing methane is more cost-effective, especially in the short term. In the long term, I think investments in clean energy R&D are probably better because once building nuclear/wind/solar/EGS/tidal power plants becomes the new normal, opposition to carbon taxes should melt away and we will not revert back to coal/gas plants. Thus the effect of clean energy investments is mostly permanent, while the price of avoiding methane leaks is constant vigilance.
I'm not on-board with "talk about methane more". If we're concerned about short-term issues, we can just detonate some nukes inside of volcanoes. If we are (correctly) concerned about long-term issues, the only factor that needs to be considered is CO2.
"we can just detonate some nukes inside of volcanoes"
Some part of my brain is going "That would be totally awesome, WE SHOULD DO IT!!!!" while the rest of my brain is trying to restrain it because no. No, we shouldn't.
I too have both of those reactions :D
The idea of short-term geoengineering to reduce temperature is one that shouldn't be dismissed out of hand, but even then there are far more controlled and safe ways to get dust into the upper atmosphere than nuking volcanoes, even if a tube tied to a weather balloon is far less epic.
"We don't know who struck first, us or them. But we know that it was us that scorched the sky."
Did anyone ever figure out if iron fertilization would actually make a significant difference w/r/t carbon capture? There's something about fertilizing the oceans that just appeals to me. Kind of has a 50s "We'll farm the oceans!" feel to it.
Is the nuke supposed to make the volcano erupt, or stop it from erupting? Volcanic eruptions give off CO2.
They do, but they also give off large amounts of dust and SO2 (SO2 in the atmosphere winds up as H2SO4 aerosol sooner or later). Eruptions powerful enough to inject that dust and sulphur into the stratosphere tend to have net cooling effects, at least on a timescale of years.
Probably not a good idea. Global incoming shortwave affects photosynthesis, which both partially offsets the cooling (less photosynthesis -> more CO2) and puts the world food supply in doubt.
Increasing man-made structures' albedo is safe, and direct CO2 removal is obviously safe, but aerosols have a large potential to do more harm than good.
The long term and the short term are not unconnected! Buying ourselves more time in the short-term (by reducing methane emissions, or detonating nukes if you really want to) can massively improve our long-term outcomes (more time for tech development, price decreases, capacity building, etc.).
If the vast majority (or even the consensus) of people and governments aren't concerned about the long-term, I don't believe that short-term fixes will make a hill of beans of difference.
"averting disaster for now – kicking the can down the road – is the essence of success"
https://modeledbehavior.wordpress.com/2012/03/05/on-europe-tyler-and-i/
https://modeledbehavior.wordpress.com/2012/02/13/notes-on-life-notes-on-greece/
Sure! But will it be easier (technologically, financially, politically) to radically decarbonize the economy and drawdown megatons of CO2 in 2021 or 2045?
I could be wrong, but my guess is 2045
I think you're wrong. If we can't find agreement in 2021, why will we be able to find agreement in 2045?
(and "because of various climate disasters" isn't a valid argument; I believe we are presuming that short-term actions prevent those climate disasters from happening).
We're doing much better in 2021 than we were in 2010 at finding international (and domestic) agreement about drawing down carbon emissions. I don't see any reason why 2045 wouldn't be so much better (especially since internal combustion vehicles will all be over 10 years old by that point).
It's something that's a lot cheaper to do slowly than to do quickly.
Politically:
The people who will be in charge then will have been born in 1995. Very different cohort than today's decision makers.
But even if you disagree with me on that, technology plays a really important role here. Look at the price of renewable energy today vs 20 years ago. Will the same happen with direct air capture? Maybe!
We should also talk more about third-world CO2 emissions from burning wood and forests, which IIRC accounts for more than half of global CO2 emissions--but I can't even find a source for this critical number now!
Consideration of CO2 from third-world countries has been eliminated from all of the scientific studies on global warming that I've read, with the claim that we shouldn't count CO2 emissions from burning wood because it's "carbon neutral", meaning we can suck that carbon back out of the atmosphere by growing more wood. Now there are even people advocating switching to wood as a fuel source because it's "carbon neutral!"
I suspect this is driven by the desire to blame global warming on high-tech industrialized countries rather than on low-tech non-industrialized ones, and by the general romantic / leftist desire to demonize technology.
The problem is that you can't say both that
(A) climate change is so urgent that we need to reduce CO2 levels within the next 30 years, and
(B) we shouldn't worry about most of the CO2 now being emitted, because it will be re-absorbed within 100 years.
See, e.g.:
https://www.smithsonianmag.com/smart-news/epa-declares-burning-wood-carbon-neutral-180968880
https://www.climateinteractive.org/media-coverage/new-york-times-op-ed-burning-wood-is-worse-than-coal-for-the-climate
• The figure that wood burning accounts for more than half of all CO₂ emissions sounds highly dubious to me. Eyeballing the first chart I could find about the topic ( https://en.wikipedia.org/wiki/File:CO2_Emissions_by_Source_Since_1880.svg ), CO₂ emissions from land use changes (deforestation) are significant but much less than half of all emissions (they look like ~15%). These are about emissions from net deforestation, so they likely don't contain emissions that are compensated for by the growth of other forests. Also, poor countries (especially ones poor enough to mostly use wood for energy) just don't use that much energy. Now, some forests are burned not for energy, but to make way for agriculture; however, this results in net deforestation, so it should be included in the above ~15% figure.
• It's not just that the forests we cut down today for energy *will* grow back during the next 100 years. It's that while we cut down one forest, other forests are growing *right now* (e.g. ones we've planted in the place of forests we've cut down in the past few decades), so net emissions from reductions in the volume of forests are less than the total emissions from burning wood.
• Environmentalists have definitely been opposing e.g. forest burning in Brazil in the past few years.
• It isn't necessarily contradictory to argue that we need to reduce the amount of CO₂ we emit during the next 30 years, and that current CO₂ emissions aren't that much of a problem if they are going to be reabsorbed on a (say) 60–100 year time frame. I don't expect global warming to be anywhere near as harmful as environmentalists tend to say nowadays; however, the more sensible environmentalist argument isn't that the current emissions will lead to huge problems within 30 years; it's that the current and continuing emissions will lead to huge problems on longer time scales (perhaps 60–100 year). It's not realistic to expect that we will continue the current level of fossil fuel use for 30 years, and then abruptly reduce it to 0. Rather, it can be expected that we will continue to use some amount of fossil fuels for many decades (especially so if we don't take steps towards eliminating their use during the next few decades). Say we want to keep the total net CO₂ emissions over the next 100 years below a certain level (say, one leading to ≤2°C warming). We need to choose a trajectory. Likely the only feasible trajectory achieving that is to significantly reduce fossil fuel use already during the next few decades, and keep using a low and decreasing amount after that. At the same time, since we are looking at emissions on a 100 year time scale, burning wood that will grow back within 100 years is indeed net neutral in this calculation.
My point was that being "net neutral" doesn't help us at all until the year ~2100, and people are worried about what will happen before then. Reducing carbon from burning wood doesn't give permanent benefits, but gives just as much benefit as reducing carbon from other sources for the first hundred-or-whatever years, and we claim to be concerned about those years.
Reducing carbon from burning wood would be technologically easy. Just give out a billion Franklin stoves. It's still not cheap, but it could easily be more cost-effective than insulating houses, which is part of the current plan.
> we claim to be concerned about those years
Who are "we"? Again, IMO the more sensible concerns about global warming are the long-term ones; claims that it's going to be catastrophic within a few decades are definitely bullshit or hyperbole (but that doesn't necessarily imply that concerns about the longer term are also unfounded).
>Say we want to keep the total net CO₂ emissions over the next 100 years below a certain level (say, one leading to ≤2°C warming). We need to choose a trajectory. Likely the only feasible trajectory achieving that is to significantly reduce fossil fuel use already during the next few decades, and keep using a low and decreasing amount after that. At the same time, since we are looking at emissions on a 100 year time scale, burning wood that will grow back within 100 years is indeed net neutral in this calculation.
The current IPCC scenario for "just under 2 degrees at 2100" is "halved CO2 emissions by 2050, net-zero worldwide by 2075, -20% of current production after that i.e. large active capture". Their scenario for "just under 1.5 degrees at 2100" is "zero net CO2 emissions after right now, achieved by net-zero worldwide at 2050 and massive active capture afterward".
I recall reading that cow burps are (one of) the greatest source(s) of atmospheric methane and that adding seaweed to their diet vastly reduces that, but I don't know if there have been any developments in that area. I suspect that, since farmers do not benefit from eliminating methane emissions, this might have to be legislated to make a difference.
https://www.nationalgeographic.com/culture/article/seaweed-may-be-the-solution-for-burping-cows
Huh, is the other end of the cow a greater source?
Nope! 95% of enteric methane comes from burps
Ha! You have revealed my bias here. I am the founder of a company (www.alga.bio) trying to make that seaweed in the lab so we can dramatically scale up the use of it. (The secret is that farmers *do* have an incentive to adopt it.)
But won't most of the methane we emit today be gone from the atmosphere by the time global warming may become a major problem anyway?
I'm not sure I totally understand your point. If methane emissions stay constant, methane concentrations in the atmosphere stay fairly constant as well. But if we cut emissions, concentrations will drop.
In the short term (20 years), methane has about 100x the warming potential of CO2, which means we could REALLY slow warming by reducing the concentration of methane in the atmosphere.
My point is that since methane is eliminated from the atmosphere in a few decades, emitting less methane this year leads to lower temperatures for the next few decades compared to the alternative where we don't reduce emissions this year; however, if we look at temperatures more than a few decades into the future, our methane emissions this year don't affect them.
I don't expect global warming to cause serious problems in the next few decades, even if it may cause serious problems farther into the future. Under these assumptions, there is little point in prioritizing reducing methane emissions today, if it's only likely to help during a period when the warming so far isn't a major problem yet, and won't help when (due to continued CO₂ emissions and continued warming) it may become a major problem. It will be worth cutting methane emissions when (if) we are at a point where the warming is expected to cause serious problems within a few decades.
> if we look at temperatures more than a few decades into the future, our methane emissions this year don't affect them.
This isn't right. Temperature a few decades in the future is shaped by all the net energy flows into the earth over the next few decades. If the present decade has greater net energy in-flow, then temperature a few decades from now will be higher than if the present decade has lower net energy in-flow.
Temperature a century from now = temperature currently, plus the net energy in-flow from each of the ten decades. If we can get the current decade lower and keep all the other decades the same, then we keep temperature a century from now from being as high as it would otherwise.
Doesn't work that way. The energy flow in and out of the Earth's surface is balanced each and every year. But different compositions of the atmosphere result in a different temperature being required to match the energy out with energy in.
I'm surprised at the claim that the energy flow in and out is balanced each and every year - I would have thought it could easily take years or decades to reach the new equilibrium temperature, when there's a change in atmospheric transparency at different wavelengths, or a change in surface albedo, or anything like that.
I'm under the impression that any given composition of the atmosphere leads to a certain equilibrium temperature. With a given composition of the atmosphere, a warmer Earth radiates more heat out into space. So it's not like any extra energy inflow will stay with us forever as extra heat.
Now, I don't know how long it takes to reach that equilibrium temperature; perhaps it takes a long time. However, even then, assuming that the current level of greenhouse effect (even including methane) is lower than the level we will eventually reach due to continuing CO₂ emissions even assuming a relatively low trajectory of future emissions, the current methane emissions just make us reach the eventual equilibrium slightly faster; they neither increase the eventual equilibrium, nor make us reach a higher peak of temperatures than the eventual equilibrium.
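A toy way to see the "equilibrium temperature" point is the standard zero-dimensional energy-balance model; the sketch below uses textbook round numbers purely for illustration, not as any kind of climate projection:

```python
# Toy zero-dimensional energy balance: absorbed sunlight = emitted infrared.
# (1 - albedo) * S / 4 = epsilon * sigma * T^4, solved for T.
# Values are standard round figures for illustration only.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant, W m^-2
ALBEDO = 0.3       # planetary albedo

def equilibrium_temp(effective_emissivity: float) -> float:
    """Surface temperature (K) at which outgoing IR balances absorbed sunlight."""
    absorbed = (1 - ALBEDO) * S / 4
    return (absorbed / (effective_emissivity * SIGMA)) ** 0.25

print(equilibrium_temp(1.0))    # ~255 K: no greenhouse effect
print(equilibrium_temp(0.61))   # ~288 K: roughly today's effective emissivity
# A stronger greenhouse effect (lower effective emissivity, e.g. from extra CH4)
# raises the temperature needed to rebalance the budget; once the gas decays
# away, the equilibrium drops back. That's the sense in which this year's
# methane changes how fast we warm, not where the system eventually settles.
```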
That all sounds right to me. My claim is just about the medium term temperature, not the eventual equilibrium.
What are the chances that, if God exists, he practices Bayesian reasoning? Does a perfect God update God’s priors? Or does a perfect God have perfect priors to begin with?
Memo from the Irony Department: Did you know that first use of Bayesian reasoning was a proof that God exists? After Rev. Thomas Bayes died, his good friend Rev. Richard Price (also a mathematician) went through Bayes’s papers and found the unpublished essay on the "doctrine of chances", and Price had it published. The good Reverend Price then used the late Reverend Bayes doctrine of chances to formulate a proof for the existence of God. David Hume who had published his _Of Miracles_, which argued against the existence of miracles (and thus indirectly argued against the existence of God), wrote Price a nice letter saying that he found Price's argument to be most ingenious. I don't think Hume was convinced by Price's proof, though.
TLDR: Zohar, your question above and then running across this tidbit of information about the Bayesian proof of God started me thinking. As an agnostic I wondered if one could use Bayesian reasoning to define the *type* of God that *might* exist, given the priors that we now know about the universe. Since you have a kool Kabbalist handle, Zohar, let's call this hypothetical god-entity Ein Sof ("the Unending") to honor the Kabbalists, who were the first to de-Yahwehnize the concept of God into something much more abstract.
Assumption: The Ein Sof god entity would be interested in a universe with physical constants that would be amenable to development of life. Unknown: if there might be other emergent phenomena that Ein Sof would be interested in (see item 1).
1. There's a high probability that this Ein Sof entity would be OK with emergent phenomena. And, considering the physical constants of the universe that we exist in, there's a high likelihood that Ein Sof may be interested in a universe that could support the development of life. NB: despite the belief/claims of some Physicists that all the interactions of particles at the quantum level have been pre-determined since the Big Bang, there's no way to predict from currently known physical laws the emergent phenomena we've seen (Standard Model → chemistry → life → consciousness). Therefore, since the laws of emergent phenomena don't seem to be baked into the constants of our Universe, it's likely that Ein Sof is interested in a certain level of randomness balanced against organizing attractors.
1a. QUESTION TO CONSIDER: Are there other emergent phenomena that have not yet manifested themselves in our universe, that we can't conceive of, but that Ein Sof might be interested in seeing emerge? E.g. I'm put in mind of Carl Sagan's quip that the universe is evolving toward the emergence of God. Could Ein Sof be trying to create a universe that will become aware of itself and functionally become another Ein Sof-type entity?
2. What does Ein Sof know and how and when does Ein Sof know it?
2a. Assuming this God-entity cared about life, Ein Sof would need to know beforehand the values for the 26 dimensionless constants that create a universe where life can evolve, and Ein Sof would have had to acquire this knowledge somehow. One avenue is through omniscience. This seems unlikely, though, given the priors we see in our current universe (see item 3, but an item 4b scenario would work in favor of omniscience).
2b. Assuming Ein Sof did not know ab initio the ideal constants to encourage life, this entity would probably need to run experiments to get the correct recipe for the ideal dimensionless constants to promote life. In which case, this universe may be one of many experiments that Ein Sof has run, is running, or has yet to run.
3. Assuming that Ein Sof is interested in emergent phenomena, Ein Sof is probably not omniscient. If Ein Sof *were* omniscient, that would probably mean that all the phenomena of emergence are just an illusion. And likewise, Ein Sof wouldn’t need to create a universe if it already knew what would happen (and more importantly Ein Sof wouldn’t need to create a universe where some physicists need to believe that all particle interactions are non-random and pre-determined). Therefore, Ein Sof may not know the endgame for this universe; Ein Sof may not know its own ends; and Ein Sof may not be able to explain or prove its own existence, and it’s through the externalities of universe building that it’s trying to understand itself (see item 4a and item 5).
4. The Yahweh theory of God that the ancient Israelites believed in (and that contemporary atheists use as a theist stalking horse) is almost certainly ruled out by our current understanding of the universe. The remaining theories of God which may or may not define Ein Sof:
4a. The universe is a simulation run on a giant computer. This is an updated version of the 18th Century Watchmaker Hypothesis that several modern Physicists (plus Elon Musk) have suggested. Supposedly there's a mathematical proof that this scenario is impossible. I haven't seen it, and I probably couldn't understand it if I did see it, so I'll leave that question open. But if the universe is a giant computer simulation, we're facing the Turtles All the Way Down problem, because that would mean Ein Sof is a computer programmer sitting in a universe external to us. Where did that universe come from?
4b. The universe is the mind of God. This is an old hypothesis put forth by mystics in various schools of thought over the past two and a half millennia. If so, this raises some other interesting questions, such as: can Ein Sof observe the workings of its own mind? Are we the sensory apparatus of Ein Sof? Can Ein Sof explain itself to itself? And we face another Turtles All the Way Down problem of who or what created Ein Sof. Likewise, we may just be Ein Sof omnisciently thinking out how the universe is going to evolve. So, under the Mind of God hypothesis, the Ein Sof is very likely omniscient, but it’s too lazy to bother to build a universe.
4c. There’s no reason that there need be a single Ein Sof, there could also be Multi Sofs.
5. Under the Carl Sagan Quip Hypothesis that the universe is evolving into god (see item 1a), Ein Sof may not exist yet, but will exist in the future. In which case we might be the equivalent of Pre-Cambrian eukaryotes dreaming of our unknown descendant(s). But it would be kind of sucky for Ein Sof if it evolved just in time to see the heat death of the universe — q.v. existential angst.
I hope you enjoyed this speculative romp. Remember, adding a God-entity to the universal equation violates Occam's Razor. But for Occam's Razor to work, it requires that there be a simpler explanation. Unfortunately, we don't have any testable hypothesis for the existence of the universe (and I doubt we ever will). If we remove the God-entity, we have to envision some meta-laws which would create the laws that this universe is based on, and those meta-laws would have to have a matrix in which to work. So either way we are still faced with the Turtles All the Way Down Problem.
What are God's priors on God having priors? His priors on what His priors are? His priors on what priors are? His priors that Bayesian reasoning is stupid?
The conceptions of Christian God I'm aware of all feature omniscience, which rules out the need for any type of reasoning at all, I would think.
Beyond that, I think the answer is just 'if you're making up a God, it does whatever you say it does.'
I'd say that when God is created, They will use Bayesian reasoning with more than 80% probability. Maybe later They will figure out something better.
1 Corinthians 13 gives some hints about knowledge and God from a Christian theological perspective.
"Love never fails. But where there are prophecies, they will cease; where there are tongues, they will be stilled; where there is knowledge, it will pass away. For we know in part and we prophecy in part, but when completeness comes, what is in part disappears. When I was a child, I talked like a child, I thought like a child, I reasoned like a child. When I became a man, I put childish ways behind me. For now we see only a reflection as in a mirror; then we shall see face to face. Now I know in part; then I shall know fully, even as I am fully known."
Paul expects that in heaven he'll have something more like the kind of mental faculties God has. And he's not even willing to use the word "knowledge" to describe those faculties without a few sentences of qualification. What we call "knowledge" is partial, like how a child's babbling resembles language or how a mirror's reflection (hammered bronze, at the time, very different from our smooth silvered glass planes) resembles the reflected object.
So, I think Paul would tell you that the answer is no. What God does is very unlike the thing we do called "reasoning" or "knowledge". That's a messy, complex, imprecise process that introduces all sorts of distortions. It's a partial, imperfect attempt to do something else, which is very different, and which is what God does.
Interesting you should ask. The phrase that Yahweh used when he spoke to Moses was אֶהְיֶה אֲשֶׁר אֶהְיֶה — which English-speaking theologians have translated as "I am who I am", but that entity's words can also be translated as "I will become what I choose to become". The medieval Jewish Kabbalists had a vision of Ein Sof as a dynamic force pouring creative energy into the cups of the ten Sefirot, which in turn overflowed into the physical and spiritual world (I may be oversimplifying, though). Whether Ein Sof has personality and intelligence and is omniscient aren't really questions that seem to be addressed in what little I've read. But the Kabbalistic world view seems to be totally down with emergent phenomena. So Ein Sof could be tuning the universe via Bayesian-like process.
Whenever I stumble across this sort of information about the Yahweh godhead I always walk away wondering why current Christianity seems so fucking devoid of well, anything. We've ended up with this total Chad of a monotheistic god and it blows. I might still be a Believer if there was room for a "becoming" god.
This feels like you and I aren't looking at the same Christianity. The fact that Christianity has, in Christ, a dynamic, becoming God is what puts it ahead of the other monotheistic religions, to my thinking. Christmas is about God becoming man. Good Friday is about The Living God becoming dead. Easter is about the dead Christ becoming alive again.
That’s a fascinating application. Thanks. Also kind of Hegelian.
I haven't read Hegel in the original, just critiques and summaries of Hegel. But Hegel seemed to be concerned about God being something that was in a state of becoming. But didn't he have something about God dying to become Jesus and Jesus dying to become God? If he lived a few centuries earlier, he would have been burned at the stake for writing that.
I think omniscience is pretty much isomorphic to having a prior of 100% in favor of every true proposition. So no update could ever be needed (or possible).
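Spelling that out, it's just Bayes' theorem with the prior pinned at 1 (a small sanity check, nothing deeper):

```latex
% Bayes' theorem when the prior P(H) = 1, so P(\neg H) = 0:
P(H \mid E)
  = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
  = \frac{P(E \mid H)\cdot 1}{P(E \mid H)\cdot 1 + P(E \mid \neg H)\cdot 0}
  = 1
```

(provided P(E|H) > 0, i.e. the true proposition didn't make the observed evidence impossible), so no evidence can ever move the posterior.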
Agreed
So in what sense is there such a thing as free will, if we humans are incapable of surprising God?
The traditional Catholicy, non-Calvinist Protestanty, Orthodoxy, (I'm adding -y because I'm going to butcher the specifics here) way of thinking about it is this: just because God knows what you will choose doesn't mean you weren't free to choose it. It just means he already knows what your choice will be, because he is equally present in past, present, and future.
To think of it another way: I know I chose to eat Arby's for lunch today. I know it because it happened, and is in the past. Does the fact that I know what choice I made in the past mean that it wasn't an actual choice? Of course not. Does the fact that I cannot change that choice now, because it already happened, mean that I couldn't have chosen something different then? Most would say no. Well, God is in the same position that someone would be if they existed at the end of time and knew everything that had happened in history: such a person could exist without disproving free will, just as you knowing what choice you made yesterday doesn't disprove your free will. Since God is "outside of time", so to speak, he does exist in such a position and we shouldn't sweat it over whether we don't have free will just because he already knows what we did/are doing/will do.
OK then, in what sense does God have free will? Doesn't He need it to create the universe and all the people in it? If not...then the entire Intelligent Design argument, which is predicated on the *necessity* for an independently-choosing intelligence, falls to the ground. If God is compelled by perfect rationality, or perfect foreknowledge, then He is no more intelligent chooser of consequences than is the law of gravity "choosing" to make the apple fall down instead of up.
Hoo boy that's a tough one. I'm just a layman, and that's more of a *Summa Theologica* type question. From what I understand, God is free to be who he is. I Am what I Am, and all that.
Let me put it this way (probably badly): let's say you encountered a machine with two buttons. One button will give you something you want (let's say, $10,000) and the other button causes a rubber mallet to whack you in the kneecap. And let's say before you push a button you learn, with 100% confidence, which button is which. Well, in one sense you could argue that you aren't really free to choose which button any more: you'll obviously choose the $10,000 button. Yet in another sense you still have free will, because you can now exercise your will perfectly. If you didn't know which button is which, your will and your actions might work at cross purposes: you will the $10,000 button, but in your ignorance you press the mallet button. So does learning which button is which remove your choice, or empower you to choose correctly?
Similarly, God wills a particular end to, well, everything, and acts in perfect accordance with acquiring that end because he has perfect knowledge and perfect power. In one sense you could say he is not free, because he will always choose to do that which is in accordance with his will. But that's a hollow kind of freedom, the freedom only to make mistakes, to frustrate your will.
I hope that makes sense.
The Abrahamic God is omniscient, therefore Bayes is useless to Him.
Depends on which Abrahamic God you're talking about. Actually, I think only modern Christian sects go with the full omniscience jive while at the same time separating God from the rest of the universe. I guess (some) Sufis also see Allah as omniscient, but he's omniscient through all living beings, since we're all emanations of the godhead — I've sort of internalized it as we're all little neurons in God's brain.
Can you find any references to back up this "only modern Christian sects" claim? This sounds like the sort of thing that I should be able to find at least implicitly in the Summa Theologica...
The fact that Aquinas expends so much effort on Question 14 (the question of God's omniscience) suggests that there was significant debate on this issue among medieval scholastics (at least in the 13th Century). Yes, I suspect that most modern Christian sects derive their arguments for God's omniscience either directly or indirectly from Aquinas — directly as in agreement with his logic, or indirectly as a response in disagreement with his logic. But old "Dumb Ox" Aquinas had too much meat on his arguments for Christian theologians to ignore him. It took a few centuries and a bunch of Popes before Aquinas was accepted as holding the "correct" views by the Catholic Church. And even though (I gather) most Protestant sects regard Aquinas with a jaundiced eye — being that his arguments were contaminated by "pagan" thought (i.e. Aquinas depended too much on Aristotle's reasoning methods) — I don't think I'm wrong when I say most modern Christian sects seem to be on board with his basic claim of God's omniscience.
Doesn't Aquinas *also* spend a lot of effort on proving that God exists? There was a lot of debate about proofs of the existence of God, but there were very few, if any, participants in these discussions that had significant doubts about the existence of God. So I don't think you can use the existence of lengthy arguments as proof that the majority of the community disputed the conclusions of those arguments.
(Besides, many of the classic proofs that date back to well before Aquinas lead directly to a God with the three-O's of omniscience, omnipotence, and omnibenevolence.)
What I don't understand is why people seem to have trouble with the idea that religions are not static systems of thought. Religions evolve. And the reason they evolve is because co-religionists argue amongst themselves about the meanings and implications of their beliefs.
But I never said the *majority* disputed the conclusions. I just said there was "significant debate". Significant, in that some people were pissed off enough to burn those with the minority opinion at the stake for disagreeing with dominant dogma.
Boethius and St. Augustine argued against the First O using the argument that omniscience precluded human free will. The only reason I know this is because I had to write a term paper on the arguments for free will a long long time ago in college far far away. I'm not sure about the history of the Second and Third O, though. And just doing some quick Googling, it looks like there are a bunch of contemporary theologians who are uncomfortable with the omniscience argument. You'd think I was threatening certain people's core beliefs or something...
But if at the time there was "significant debate" about the matter, then that must mean there was a significant fraction of believers/theologians who did hold the now-prevalent view on omniscience! In that case, wouldn't your original claim be more accurately phrased as this view only being dominant/prevalent/universal in modern Christianity, rather than suggesting it was (mostly?) absent in more ancient times?
Good point. Except that I won't go as far as to say it was always absent in ancient times. At some point the rabbis, pre-rabbis, or the priests started wondering about the nature of omniscience. I don't know when. It may have been as late as Third Century BCE with influence of the Greeks and their logical reasoning. I don't think there was necessarily agreement until the pro-omniscient schools of thought became the dominant dogma. It was being debated as late as Tenth Century CE by rabbinic schools of the law vs mystical schools (Maimonides isn't preaching to the choir, he's preaching to his philosophical opponents). Probably this continued as the Kabbalist schools developed in the late medieval period. Likewise, the Arab philosophers were debating the question (as late as the Eleventh Century). In fact one Sufi philosopher (whose name escapes me right now) actually suggested that our consciousness was part of Allah's larger consciousness — and thus being part of god we *were* God. That didn't go over very well with the legalist schools. And as I said above Thomas Aquinas's discourse on the problem of omniscience was probably directed at scholastics who had differing views of the matter.
No, this is a core tenet of classical theism, so in Christianity it dates back to the early church fathers at minimum: Clement of Alexandria, for one. And it continues through history as an understanding of every mainstream sect, so you get folks as far apart as Augustine, Aquinas, and the Westminster Divines all supporting it.
That said, though you won’t find the formal word ‘omniscience,’ the better argument is that both Testaments are quite clear on this point: the Psalms for example insist repeatedly that God ‘knows all things,’ that His ‘understanding is infinite.’ Likewise the Gospels ascribe to God knowledge of even very trivial things, as well as knowledge of even things in the heart that men don’t know about themselves. The epistles, too, say and imply quite a bit that God knows everything.
It may be the core tenet of contemporary Catholic theism, but I think you're mistaking the Christianity of today for the only form of Christianity that's ever been.
Be that as it may, all the way back in the Garden, it's clear that Elohim/Yahweh was not all-knowing. Genesis 3:9 - But the Lord God called to the man, “Where are you?” So Elohim didn't know where Adam was hiding. And then in Genesis 3:11 - And he said, “Who told you that you were naked? Have you eaten from the tree that I commanded you not to eat from?” So, we see that the LORD didn't see the dreadful apple-eating event go down.
Judaism before Christianity had no fixation on Yahweh being omniscient and omnipotent. The rabbinical mystics of the Merkavah traditions had the myth of Enoch being carried up to Heaven (or rather down to Heaven, because in the Merkavah school of thought the Earth floats above Heaven). Enoch sat on the winged chariot throne of Yahweh and was transformed into the Angel Metatron, who became Yahweh's Mini-Me. Yahweh couldn't be everywhere at once or see everything going on everywhere, so he delegated to Metatron. More orthodox rabbinic schools discouraged this belief (labeling it heresy), because it smacked of polytheism — but questions about Yahweh's omniscience must have lingered up through the 10th Century, because Maimonides spends a lot of time in his _Guide for the Perplexed_ arguing for God's omniscience. The fact that he's arguing this point implies that there were schools of thought that disagreed with his position.
And there are threads of gnostic dualism interwoven into various Christian sects up through the 14th Century. For instance, the Cathars believed that evil was an actor independent from God, that God had no control over what evil did, and that evil could conceal its actions from God. The Catholic Church spent a lot of time inveighing against dualism (and burning dualists at the stake), but even today many Christian sects still see the Devil as an independent actor, with all sorts of theological gymnastics to explain how the Devil can be an independent actor — i.e. not of God — but still be part of God's plan (which makes God's omniscience seem sort of intermittent).
Much of this early textual evidence you are appealing to describes a tradition that is clearly not monotheistic in the modern sense of having a single creator God that is in charge of all existence, but rather having many deities, each significant to a particular tribe. It's no surprise that *those* traditions would have different views on omniscience than truly monotheistic Judaism and its successor religions.
You are basing an entire theology on the very slim reed of believing that God doesn’t ask rhetorical questions, when it is obvious from Job that in fact He does and it seems to be one of His favorite things.
“Judaism before Christianity” I make no argument about Judaism outside of Christianity, but note that good-faith arguments about religions should probably start with mainstream beliefs and work outward rather than starting with the fringe mystics and their apocryphal writings and then never arrive at orthodox thought.
“Catholic tenet” I assume you’re not ignoring my points, you just don’t have the background in Christianity to catch them, but the Westminster Divines are 100% Protestants. Far from being a Catholic tenet, it’s actually held *more* strongly by the Reformers. And as I alluded to before, RCs are only one inheritor of the early church fathers. You’ve also got to reckon with the Eastern Orthodox, the N. African church, etc., who all espouse divine omniscience. This does not seem like a winning position.
“Many Christian sects still see the devil as an independent actor” This seems like a misunderstanding: even Arminians believe that God has foreknowledge of the choices we make; he just is not the cause of them. If you genuinely believe that Satan is independent from God, you’re outside of the theologically orthodox Christian tradition and a pretty rare bird besides: y’know, a perfect match for the Cathars. I’m sure you could find some more historical heretics who believe this, but given your original comment that would be moving the goalposts pretty significantly.
Good-faith arguments about religion shouldn't assume all co-religionists of a certain class have uniform views. You're claiming there's an orthodoxy of thought that always existed where it may have only tenuously existed. It's like atheists using the Bearded Yahweh throwing thunderbolts as a stalking horse for God. I may not be as knowledgeable about the ins and outs of Christian orthodoxy as you, but I think it was Boethius who first posed the question of whether God's omniscience implies that humans have no free will. I may be mangling his argument, but if God knows the past and future like he knows the present, it is difficult to see how humans have any agency. Certainly, my Puritan ancestors took it as a given that we had no agency, and that nothing we could do would change whether God had decided we were damned or saved.
You've forced me to dust off my old Aquinas. Holy Yahweh, he can be a tedious read! But if I recall my history, Aquinas got into trouble for some of the conclusions he made about God's omniscience in Question 14 — he made God's omniscience out to be *too* inflexible. It took a few centuries for the Church to come around to his views — just in time for the Reformation.
Anyway, I don't find your arguments convincing because (a) they are based mostly on how the Church (however you want to define it) sees itself today, and (b) contemporary Christianity has a selective memory of past events, ignoring all the unpleasant arguments that went down between medieval scholastics about God's omniscience. Likewise they brush under the theological carpet all the alternative Christian belief systems that were out there but were branded heretical by the establishment theocrats of the time (and therefore we should think no more about them, perish the thought!).
"Where are you?" and "Have you eaten from the tree?" could be rhetorical questions.
Also devices to enliven storytelling. IIRC there was an oral tradition element to these stories and concepts, and the audience was all ages.
This assumes the ancient Israelites needed to claim that the words of Yahweh/Elohim were a rhetorical device to make the Eden myth consistent with their beliefs. I would argue that as a band of bronze-age goat herders the ancient Israelites hadn't gotten around to considering the question of Yahweh's omniscience, yet.
My point is that religions evolve. The Judaism practiced today is different from the way that religion was practiced before the Diaspora, and the religion of the Jews under Roman rule was different from the way the ancient Israelites practiced their religion.
Signal boosting; Godoth is correct here. This is a core belief, if not always expressed as "omniscience".
> I've sort of internalized it as we're all little neurons in God's brain.
While I haven't quite put it like that, I've been thinking atheists are like neurons that deny the existence of people. Funny coincidence.
And atheists are the only ones who still envision God as being a bearded Yahweh...
Hahahaha!
God's particularly pure probationary priors proceed to perfect papacies.
If God is infinite, spanning time and space, how can anything be prior?
That’s kind of a cheat, haha.
Maybe reality is the mind of God and events transpiring are God’s priors updating.
I had to google St. Francis of Assisi - which then took me to Loyola/Jesuit theological constructs. "recognize, like Jesuit Gerard Manley Hopkins, that “the world is charged with the grandeur of God”—the positive, energetic and engaged vision of God's constant interaction with creation" - of course finding one's theology in poetry only is a limitation. And there would be a distinction between "God is the world" versus "God in the world/spirit in the world." Being charged with the grandeur of it might not be identical to being it. But if God is always and only separate from the world, all sorts of irritating sequelae appear; where exactly is the boundary? Measured in what units? There seems to me to be a distinction between "spirit" ie whatever is pouring, and an omniscient being/higher intelligence - but where would that line be? It usually devolves into paradox, which to me means that human reason is having trouble with it and there's some other phenomenon going on.
Approaching it can be interesting though. If God doesn't span time, when did God pop in and decide to stay? If God doesn't span space, where can we go that is outside God? Are there higher-order beings/intelligences which interact with us but which are not "God" per se?
I need to know more about Christian theology than I do. Intermittently I have suspected that one of the major motivations for a priesthood, historically, was that political leaders realized that anyone with the patience and interest in debating philosophy would engineer a doomsday machine eventually if left to their own devices. So part of the point of religious practice was to keep the brains busy and out of the way (not the opiate of the masses per se, the opiate of the earlier-stage technical class). That being said, I think spirituality is an important element of life, and that perhaps some of the brains were kept a little too busy and lost touch with what they were trying to describe.
Sometimes I feel uncomfortable about borrowing parts of Christianity and meshing it with quasi-animist handwaving. Then I say, no, that's a very Christian experience, contemplating and discovering again and again what a "one God" construct would be. There's a "history + philosophy" aspect of doctrine in which some of it seems to be due to history or political choices, and some of it seems to be observations on the nature of reality. I don't know why the early Christians jettisoned reincarnation, for example, or women as religious leaders.
But having God relegated to an upstairs room, coming down occasionally to yell at the family or demand dinner, makes possible all kinds of great stories about human nature.
The obvious retort to this is some statement of compatibilism. It certainly doesn't seem obvious to me that free will can't be compatible with one's choices being deterministically knowable. For example, if I know my friend is thirsty and I offer him water, I know he'll accept, but that doesn't mean his decision is unfree in any meaningful sense.
I suppose ultimately this is basically a semantic discussion, but I find it hard not to side with the compatibilist view when I consider that, in a Universe fully described by (deterministic) General Relativity plus the actions of a "pseudo-free" agent who always makes truly random choices, everything is still completely knowable to a God who can observe the entirety of the four-dimensional Universe from "outside".
This week there was a cool new gene therapy paper that used human proteins (instead of viruses) to package mRNA. So far it's only been tested in cell culture but in my opinion it's very promising. I wrote about it here: https://denovo.substack.com/p/mrna-delivery-gone-non-viral
I've also continued my human herpesvirus series; this week's post is about varicella-zoster virus (which causes chickenpox and shingles). It's the only human herpesvirus for which effective vaccines exist. https://denovo.substack.com/p/varicella-zoster-virus-a-rare-success
Thanks for these, will read in the AM.
Do we have any of von Neumann's remains left for sequencing? (I imagine the Manhattan Project members had regular blood work done, maybe some got stored?)
Any guesses of what would happen if an egg and a sperm both got rewritten with the relevant SNPs and other interesting alleles?
Von Neumann was buried normally in accordance with his Roman Catholicism (cremation still being looked on askance). I've never read of any known biological samples surviving the way they did for Einstein. (For all his nuclear-related work, he was not much involved close-up - he was a mathematician, working on computing - so he would be a somewhat improbable candidate for sampling, although people always suggest that the cancer might've been related.) As far as partial overwriting goes, you'd get a lot of regression to the mean under the additive model, and under the emergenesis model, you'd be utterly disappointed. (Remember: he had a daughter and two grandchildren. So 50% and 25%. His daughter is accomplished but you probably haven't heard of her: https://en.wikipedia.org/wiki/Marina_von_Neumann_Whitman The grandchildren don't even rate WP entries.) That's why people shoot the breeze about full-blown cloning: minimize regression, and preserve any interactions or emergenesis-like nonlinearities.
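A minimal back-of-the-envelope sketch of that regression, under the purely additive model; the +4 SD figure and the narrow-sense heritability of 0.5 are illustrative assumptions, not estimates:

# Expected phenotypic deviation of a relative of a proband, assuming a purely
# additive trait, mates drawn at random from the population, and narrow-sense
# heritability h2 (standard breeder's-equation logic; a sketch, not a prediction).
def expected_relative_deviation(proband_sd, relatedness, h2):
    return proband_sd * relatedness * h2

proband_sd = 4.0   # suppose the proband sits at +4 SD on the trait (illustrative)
h2 = 0.5           # assumed narrow-sense heritability (illustrative)

print(expected_relative_deviation(proband_sd, 0.50, h2))   # child (50%): +1.0 SD expected
print(expected_relative_deviation(proband_sd, 0.25, h2))   # grandchild (25%): +0.5 SD expected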
Does anyone here know how long the DNA in bone marrow would last? I would naively expect it to still be extractable, given an intact skeleton
The ancient DNA people (eg the Reich lab) go for the inner ear bones, of all things. Turns out to be the best place for preserving DNA intact even over hundreds of thousands of years. ("It [the petrous bone] is one of the densest bones in the body." - who knew?!)
Malcolm Russell Whitman, the grandson, is a Harvard professor. https://connects.catalyst.harvard.edu/Profiles/display/Person/1202
Laura Whitman, the granddaughter, is a professor at Yale. https://medicine.yale.edu/profile/laura_whitman/
They are arguably working at phenomenal levels of intellectual capacity, and do deserve the Wikipedia pages they seem to not have gotten
My favorite multi-generational family of talent is the Curies:
https://en.wikipedia.org/wiki/Curie_family
Being a dental school professor and an assistant professor is pretty good, but if you had invested hundreds of millions of dollars and multiple felonies into raiding von Neumann's grave to genetically engineer & part-clone one of the greatest mathematicians in history, expecting breakthroughs like his, then I think you would be extremely disappointed. Going from von Neumann to... them, seems like plenty of regression to the mean to me.
Note the assortative mating and family environment means that the regression there may not be as much as you'd predict.
Cloning Bentham is probably going to be harder because descriptions of his 'auto-icon' sound like there's not that much left, and the mummification/preservation process will be much harsher on DNA over 189 years than von Neumann's buried body over 64 years. I don't think there would be much point, though, compared to von Neumann: Bentham had an impact more because of his ideas and his personality entertaining ideas others simply refused to entertain, but despite being a bit of a child prodigy, he never struck me as important due to raw abilities of the sort you might expect to be highly heritable. He was a bullet biter, but we have plenty of them around today.
Sequencing dust: If you couldn't use the body, then yeah, it's hypothetically possible to sequence ambient traces instead ( https://www.gwern.net/Embryo-selection#glue-robbers-sequencing-nobelists-using-collectible-letters ). Humans shed astonishing amounts of DNA, and DNA sequencing has gotten astonishingly sensitive. It is not at all out of the question to recover DNA from things like licked stamps, and you could likely find some correspondence von Neumann licked himself, and there you go. I was amused by this idea so I did a bit of looking a while ago, and collecting letters/stamps is surprisingly affordable, and there's no 'copyright' or 'patent' which their descendants might inherit or 'privacy right', because the cells/DNA are considered property which they disposed of and you have purchased in full. If you have a stamp that, say, Albert Einstein licked (would cost only a few thousand dollars), there's no reason at all you couldn't just go sequence it with forensic-level genome sequencing and just post it online... So I proposed that you could invest something like $100k into buying up memorabilia from the likes of von Neumann as a de facto, implicit DNA biobank of extraordinary individuals - none of this appears to be priced into current collectibles. :)
I don’t really want to hyperstition this into happening, but would one rather have 100 JvNs or 1 JvN plus 99 other extremely intelligent historical mathematicians/scientists/philosophers etc of your choice? And could one offer JvN or the mixed philosopher sperm/eggs to everyday people who want to have kids with them - more assistant professors can’t hurt
Your “why aren’t more identical twins working together” article might mean that even clones won’t be “as good”?
I would go with the latter. There is presumably some level of diminishing returns from JvNs running around; some diversity of phenotype is likely to reduce those diminishing returns by unblocking each other's bottlenecks; you have no guarantee of replicating the X-factor, or even that the X-factor is genetic to begin with, so you'd be putting a lot of eggs into one basket; and the latter will have much more research value/VoI. I'm still not too sure what to think of the lack of eminent twin pairs: twins come with a bit of a biological penalty, so there's a tail effect which I'm not sure is important or not, and there may also be another log-normal/pipeline/emergenesis-like illusion going on, with identical twins coordinating very effectively but simply not sharing identical rage for mastery or idiosyncratic 'special interests' at a sufficiently high probability to show up (although this would not be an issue with enough clones, since with enough of them, eventually special interests would collide - birthday paradox).
Given that DNA does degrade with time, would this not be a genuinely worthwhile investment towards the day when cloning tech is advanced enough to make use of the genomes? 6 figures seems extremely cheap given the potential upside.
Yeah, that was kind of my thinking. Cloning, sequencing to run GWASes on or to upload to geneologies, proteomics, studying their health from the residues, plenty of uses, institutions holding existing letters sure as heck won't let you do it, the collectibles are ridiculously cheap and can be stored in a tiny space for cheap, $100k barely buys you anything in other fields, why not?
But you have to admit, it's a super-weird thing to do, it wouldn't pay off for decades, I don't happen to be a billionaire who could casually punt $100k on something so speculative, and there's a good counter-argument that given the timescale and how many collectibles are floating around, there's no point in doing it *now* - even when it becomes totally normalized to sequence some famous person from a stamp or hairbrush, there'll probably be enough floating around (especially if you aren't too picky about which people) that you can buy whatever you need for cheap then too.
I don't really think we have a lot of regression to the mean going on here....unless Neumann was so phenomenally smart that despite two generations of regression to the mean, his grandchildren are Harvard and Yale professors
There was one physicist who said something to the effect of "von Neumann was a demigod, but he could imitate humans almost perfectly." Can't find the quote now or who said it. But it stuck in my head.
There are a hundred thousand top professors at top colleges if you’re willing to spread out a bit. There’s only one von Neumann, and maybe (?) hundreds of people who demonstrated his caliber. Doesn’t quite compare.
Heavens, no. Harvard has less than 1000 tenured faculty, and less than 300 in the schools of science and engineering. If you gathered all the science and engineering faculty in the top 20 universities, I think you'd have about 5,000-8,000 people total.
Professor in general, not science professor, should’ve specified! I picked the larger reference class for better comparison. His daughter, Marina von Neumann Whitman, is professor of business administration and public policy at the University of Michigan.
Mind you, if you restrict yourself to peer institutions of Harvard and Yale, that might be about 5-10 universities, depending on the field, and so about 1500-2500 people.
To the last: yes, he was that phenomenally smart. John von Neumann is a very good candidate for the smartest person who ever lived. From Wikipedia: "Nobel Laureate Hans Bethe said 'I have sometimes wondered whether a brain like von Neumann's does not indicate a species superior to that of man'". Read the rest of the "Cognitive Abilities" section of his Wikipedia page—it's quite something.
I have recently concluded, on the basis of a bunch of reading, that animal foods are basically bad for us. Or at least bad for people like me who are at high risk of a lot of chronic diseases. I'm sort of confused by how I could have followed nutritional news pretty closely for years and years without having found this out before. Or maybe I'm confused now. Anyway, that's what's on my mind this week.
Hm, I've come to the opposite conclusion. I ate very little meat the first 30 years of my life and was constantly tired and had a lot of digestive problems. Switched to a lot more meat and my health and weight and energy are vastly improved.
I think people just like the "idea" that animal products are bad because...well, it's gross and mean and requires killing animals. Much nicer to think of just eating things that are alive because of photosynthesis than eating flesh or dairy. But in reality, plant matter is much, MUCH more difficult to digest, 95% of it is actually toxic to humans, it gives you gas, and it's very difficult to get all vitamins and minerals from plants. Meat (1) is way easier to digest, (2) is never toxic when unspoiled, and (3) has all the vitamins and minerals you need.
So I'm not buying it. The more you feed dogs meat and not fruits and vegetables, the less gas and puking and digestive issues they have. We're omnivores just like them, and don't have the long digestive systems and multiple stomachs that ruminant vegetarian species have. We have canines and the front-facing eyes of predatory species. We are clearly designed to eat meat.
The problem is, there are really too many humans for most of us to eat a meat-heavy diet. And factory farming is horrible and cruel for the animals. So in my mind, those are the problems, but meat and animal products themselves are extremely good for you.
Plant matter is NOT “95%” toxic. Hunter gatherer diets very consistently contain lots of varying plant matter - fruits, roots, seeds, herbs - and this has been so for almost all of evolutionary history.
> The more you feed dogs meat and not fruits and vegetables
Yeah because they’re dogs. They eat lots of animals. But even wild wolves eat a decent amount of plants.
> In addition, plant matter is prevalent in wolves' summer diet, with 392 (74%) of 530 scats analyzed containing some type of plant material, largely grass (Graminae). This is consistent with summer observations of wolves consuming grass and other plant material.
Most animals will eat plants or animals if they’re available. Even deer will occasionally grab insects or small mammals.
> is never toxic when unspoiled
Also wrong. Trivial examples: polar bear liver, poisonous fish, some shark meat when pregnant, transmissible CJD and BSE, *parasites*, etc...
What is up with nutrition knowledge lmao?
Yeah most plants ARE toxic to humans. Try eating the vast majority of leaves, plants, trees, grasses...you cannot. You will get sick and will not be able to digest it, unlike a ruminant animal that has the digestive system to handle it. The range of plants that humans can actually eat, out of the whole range of plants, is extremely small...estimates range from 5% to 45% (and the high case only works with cooking to pre-digest it). In contrast, we can eat virtually all meat/animals. You have listed a tiny handful of examples out of the thousands of options, but we can pretty much eat 99.9% of animals, including even poisonous snakes like rattlers, etc. Also I never said dogs and wolves don't eat plants, they are omnivores like us, nor did I say that humans should not eat plants. Merely that the idea that plants are healthier than meat is wrongheaded and has no good evidence, and plenty of evidence for the opposite conclusion. There are humans that almost solely eat animal products and those that almost solely eat plants; we are variable. The primary issue with meat is not that it is bad for you but that it is bad for the animals and the global ecosystem...apex predators like humans running 7+ billion deep is the problem, if they're eating meat.
Oh I see now. Non-nutritious isn’t “toxic”. You can eat as many leaves as you want, but you won’t die - maybe feel a bit bad and have gas, but that’s not toxic. Plenty of herbivores aren’t generalists either, especially smaller ones and insects. And to say we can eat all animals is ... can you eat bones? Teeth? Hair? That’s a “toxic” part of animals.
Plants are not healthier than meat: yes
Meat is healthier than plants: not coherent, it’s like saying water is healthier than vitamins.
Meat would probably be a lot better for “ecosystems” if grown differently, just like plants. And there is a lot of marginal land much more fit for grazing than intensive cultivation.
Toxic doesn't mean it kills you, it means it's bad for you. A toxin. The point is, evolution designs organisms to not want to be eaten. Animals can hide and run away to not be eaten. Plants cannot, and they have therefore evolved all kinds of strategies to not be eaten, such as being toxic/poisonous, having thorns, etc. (of course, in the case of fruit, the plant DOES want that part to be eaten so the seeds can be distributed in scat, but it doesn't want the rest eaten, so the fruit is delicious and tempting while the rest of the plant is not).
I think in a head-to-head match-up, meat IS better for you than plants. Stop talking about teeth, feathers, and bones, we're talking about meat and dairy. Meat is far easier to digest than plants (doesn't cause gas, bloating, vomiting, and isn't so undigestible it just passes right through like much plant matter). It has all vitamins and minerals and on a per-calorie basis provides far more nutrients than plants. There is zero evidence that it is even possible to become obese on a meat-only diet and pure carnivores aren't fat. If you want to fatten up an animal, you give them corn and wheat, not meat. Notably, when Americans started getting fat in the 80s, that's exactly when their consumption of animal products went DOWN, as today, only about a third of calories come from meat and dairy, while in the past it was more like 50% or more.
The problem with studies comparing a vegan or vegetarian diet with a meat-eating diet is that they are not doing a true comparison, as they are comparing it with people who eat meat as maybe 30-40% of their diet, and everything else they eat is plant-based....french fries fried in vegetable oil, breads, etc. To make a true comparison of which is healthier, you would compare a vegan diet with a diet that solely consists of animal products...for example the Maasai, who have a diet that is almost 100% milk, meat, and blood. If you're going to go with a direct comparison, I'd place my vote on the animal products diet being healthier. Though it is ideal to eat a range of things. My ideal diet is about 70% meat and dairy and 30% fruits, vegetables, and grains.
Minor point, but I don't think you're right that meat is easier to digest than (edible) plants. In fact, it's probably generally significantly harder, which is why we need this complex two-part digestive tract, with radically different pH in each. Proteins need to be pried apart before they can be digested, which requires the low pH and acid of the stomach, and only then can they be chopped up, which happens partly in the stomach and partly in the small intestine.
By contrast, carbohydrates (with the exception of the cellulose to which you refer further up) and fats are simple to chop into bits, chemically speaking, and readily burned or reconfigured for storage.
Also, amino acids are a pain to store, so the body apparently doesn't, and they're dangerous to recycle and burn, because the amino group cannot easily be oxidized in higher animals (there are bacteria that can), and when it is pried off of the amino acid it readily forms ammonia, which is exceedingly toxic -- requiring some fancy biochemistry in all animals higher than fish to get rid of the stuff safely. In humans we're obliged to throw away 1 perfectly good carbon atom for every 2 N atoms we don't want, which is not bad but obviously inefficient.
You're certainly right it's more nutritious of course. The most nutritious food for humans would be other humans, ground up and properly sterilized.
I think most of the evidence of poor health outcomes is related to 'red meat', not animal foods more generically. And that evidence is typically about "high levels" of red meat consumption. A search "red meat" on pubmed.gov produces a large number of studies; none are randomized trials, and a lot are junk, but the gist is that high levels of red meat consumption are linked with different digestive cancers, heart disease, and type II diabetes, among others.
Anecdotally, I find red meat hard to digest, and have avoided it for decades; more recently, I contracted the alpha gal allergy and break out in hives if I eat any mammal that is not a primate. So I don't really have to think about this any.
I would emphatically disagree with red meat being unhealthy. There’s the HG comparison - bison hunters were fine. Red meat isn’t correlated with fat either - bison meat is leaner than chicken (and is delicious).
> > Red meat intake was not associated with CHD (n=4 studies; relative risk per 100-g serving per day=1.00; 95% confidence interval, 0.81 to 1.23; P for heterogeneity=0.36) or diabetes mellitus (n=5; relative risk=1.16; 95% confidence interval, 0.92 to 1.46; P=0.25). Conversely, processed meat intake was associated with 42% higher risk of CHD (n=5; relative risk per 50-g serving per day=1.42; 95% confidence interval, 1.07 to 1.89; P=0.04) and 19% higher risk of diabetes mellitus (n=7; relative risk=1.19; 95% confidence interval, 1.11 to 1.27; P<0.001).
From the LW post study below, but that’s not super reliable either.
I'm not sure that the HG comparison is meaningful, most of the effects of red meat seem to be on chronic diseases that occur later in life - beyond the typical lifespan of an HG - also, it probably doesn't matter what you eat if you are running many miles per day to catch it. And of course, just because HGs did it, doesn't mean it was good for them.
Most current evidence is not regarding bison but regarding farmed beef and pork. But here are a few positive meta-analyses:
Ischemic heart disease
https://pubmed.ncbi.nlm.nih.gov/34284672/
"Thirteen published articles were included (ntotal = 1,427,989; ncases = 32,630). Higher consumption of unprocessed red meat was associated with a 9% (relative risk (RR) per 50 g/day higher intake, 1.09; 95% confidence intervals (CI), 1.06 to 1.12; nstudies = 12) and processed meat intake with an 18% higher risk of IHD (1.18; 95% CI, 1.12 to 1.25; nstudies = 10)."
Breast cancer
https://pubmed.ncbi.nlm.nih.gov/33271590/
"Positive associations were observed for red (RR per 100 g/d, 1.10; 95% CI, 1.03-1.18) and processed meat (RR per 50 g/d, 1.18; 95% CI, 1.04-1.33). None of the other food groups were significantly associated with breast cancer risk."
Cancer:
https://pubmed.ncbi.nlm.nih.gov/33838606/
"The purpose of this umbrella review was to evaluate the quality of evidence, validity and biases of the associations between red and processed meat consumption and multiple cancer outcomes according to existing systematic reviews and meta-analyses. The umbrella review identified 72 meta-analyses with 20 unique outcomes for red meat and 19 unique outcomes for processed meat. Red meat consumption was associated with increased risk of overall cancer mortality, non-Hodgkin lymphoma (NHL), bladder, breast, colorectal, endometrial, esophageal, gastric, lung and nasopharyngeal cancer. Processed meat consumption might increase the risk of overall cancer mortality, NHL, bladder, breast, colorectal, esophageal, gastric, nasopharyngeal, oral cavity and oropharynx and prostate cancer. Dose-response analyses revealed that 100 g/d increment of red meat and 50 g/d increment of processed meat consumption were associated with 11%-51% and 8%-72% higher risk of multiple cancer outcomes, respectively, and seemed to be not correlated with any benefit."
Depression?
https://pubmed.ncbi.nlm.nih.gov/32937855/
"This systematic review was conducted according to the methods recommended by the Cochrane Collaboration and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Relevant papers published through March 2020 were identified by searching the electronic databases MEDLINE, Embase and Scopus. All analyses were conducted using ProMeta3 software. A critical appraisal was conducted. Finally, 17 studies met the inclusion criteria. The overall effect size (ES) of depression for red and processed meat intake was 1.08 [(95% CI = 1.04; 1.12), p-value < 0.001], based on 241,738 participants. The results from our meta-analysis showed a significant association between red and processed meat intake and risk of depression. "
Many conditions:
https://pubmed.ncbi.nlm.nih.gov/33648505/
"On average, participants who reported consuming meat regularly (three or more times per week) had more adverse health behaviours and characteristics than participants who consumed meat less regularly, and most of the positive associations observed for meat consumption and health risks were substantially attenuated after adjustment for body mass index (BMI).... Higher consumption of unprocessed red and processed meat combined was associated with higher risks of ischaemic heart disease (hazard ratio (HRs) per 70 g/day higher intake 1.15, 95% confidence intervals (CIs) 1.07-1.23), pneumonia (1.31, 1.18-1.44), diverticular disease (1.19, 1.11-1.28), colon polyps (1.10, 1.06-1.15), and diabetes (1.30, 1.20-1.42); results were similar for unprocessed red meat and processed meat intakes separately."
All of these are observational studies (or meta-analyses of observational studies), so come with the corresponding caveats. But there is a notable lack of evidence pointing the other direction, so I would not dismiss it out of hand, much less "emphatically".
> beyond the typical lifespan of an HG
No. HGs regularly live to their sixties and seventies, with only a small few making it to 80. That’s plenty of time for these diseases to show up, and they don’t. Average lifespan is heavily weighted down by disease and early mortality.
Large scale diet observational studies are bunk IMO. And in terms of observational evidence the other way, there ... is some in the post below. Also shouldn’t it be kinda sus that red meat causes all bad conditions at the same time, according to correlations? That’s possible, but maybe it’s a broader correlation? A bigger point is that nomadic herders also don’t have these diseases despite eating herded meat.
I'm not going to agree that all large scale observational studies of diet are bunk; but I think they all fail in isolating the effects of diet from other lifestyle choices. Nomadic herders and HGs are much more physically active than the typical Westerner, which probably trumps all other lifestyle factors, including diet. The observational studies try to 'adjust' for things like exercise etc but this is largely imperfect analytically and rarely reflects the type and amount of activity. It is notable that red meat, esp processed red meat, has very little evidence, observational or otherwise, of health benefits.
I’m not sure how significant exercise is relative to diet. Both important obviously. My guess is red meat, especially if it’s from a heritage variety that eats grass or something, is fine, and that since nobody eats it like that along with an otherwise healthy diet the population studies don’t notice. Not sure precisely how it plays out though.
Going by this LW post https://www.lesswrong.com/posts/PhXENjdXiHhsWGfQo/lifestyle-interventions-to-increase-longevity#Nutrition, *processed* meat is the main culprit.
key quote: "Processed meat consumption has the single largest negative effect on health. It is shockingly bad, even if you already suspected as such."
Reading this
It’s mostly quite good.
> CoQ10 for BP
Recent studies seem to agree that it helps. However, I’m still suspicious. Also,
> An important enzyme in this [coQ synthesis] pathway is HMG-CoA reductase, usually a target for intervention in cardiovascular complications. The "statin" family of cholesterol-reducing medications inhibits HMG-CoA reductase. One possible side effect of statins is decreased production of CoQ10, which may be connected to the development of myopathy and rhabdomyolysis. However, the role statin plays in CoQ deficiency is controversial. Although these drugs reduce blood levels of CoQ, studies on the effects of muscle levels of CoQ are yet to come. CoQ supplementation also does not reduce side effects of statin medications
Wat? What if that’s the pathway for benefit? I seriously doubt it. Still weird in other ways...
> Flavonoids/anthocyanins
Don’t supplement this, just eat wild / heirloom variety fruits; they have lots of pigment.
He’s right about the cholesterol story being funky
> Red meat intake was not associated with CHD (n=4 studies; relative risk per 100-g serving per day=1.00; 95% confidence interval, 0.81 to 1.23; P for heterogeneity=0.36) or diabetes mellitus (n=5; relative risk=1.16; 95% confidence interval, 0.92 to 1.46; P=0.25). Conversely, processed meat intake was associated with 42% higher risk of CHD (n=5; relative risk per 50-g serving per day=1.42; 95% confidence interval, 1.07 to 1.89; P=0.04) and 19% higher risk of diabetes mellitus (n=7; relative risk=1.19; 95% confidence interval, 1.11 to 1.27; P<0.001).
I’m somewhat concerned about what “processed meat” actually is - I certainly don’t eat any, to the point where I’m slightly nervous about the butcher grinding meat for me ... but I really have no clue what that means. Sliced turkey? Ham? Pepperoni? Salami? Fried chicken? If curing is processing, is smoked salmon bad? Canned tuna? And hunter gatherers certainly fermented meat and ... probably some salt preserved their meat? Celery powder? Drying? Is fried chicken not processed meat?
Also this is one of a hundred meta analyses. Do the others agree?
The medical errors study he references is wrong, but his advice there is very much right that lots of doctors are incompetent. Try to get a competent doctor and double-check your symptoms and treatments with doctor friends (you’re here, so you’re smart, have smart friends, and can evaluate how competent they are). Not doing this will probably kill you. Doing so has saved several of my friends from likely death. Check everything!
I had to go two links deep to find the definition:
> “processed meat” was defined as any meat preserved by smoking, curing, or salting or addition of chemical preservatives, such as bacon, salami, sausages, hot dogs, or processed deli or luncheon meats, and excluding fish or eggs [24];
Yeah, it's so frustrating that people talk about "processed" like they know what it means. To me, cooking something or blending it sounds like "processing" but without any obvious reason this would be bad. And likewise people talk about "preservatives" being bad but why would that be *generally* true of *all* preservatives?
As far as I can tell, what all the above sorts of processed meat have in common is ... salt, nitr[ai]tes, and drying (only for a minority; bacon, sausage, and hot dogs aren’t dried). And maybe other stuff? That seems like a narrow set of causes for what seems to be a significant portion of America’s health problems. Which seems somewhat hard to believe, but idk. Certainly salt preservation isn’t new; idk about nitrates.
Unrelated
The Middle Ages made pâté a masterpiece: that which is, in the 21st century, merely spiced minced meat (or fish), baked in a terrine and eaten cold, was at that time composed of a dough envelope stuffed with varied meats and superbly decorated for ceremonial feasts. The first French recipe, written in verse by Gace de La Bigne, mentions in the same pâté three great partridges, six fat quail, and a dozen larks. Le Ménagier de Paris mentions pâtés of fish, game, young rabbit, fresh venison, beef, pigeons, mutton, veal, and pork, and even pâtés of lark, turtledove, cow, baby bird, goose, and hen. Bartolomeo Sacchi, called Platine, prefect of the Vatican Library, gives the recipe for a pâté of wild beasts: the flesh, after being boiled with salt and vinegar, was larded and placed inside an envelope of spiced fat, with a mélange of pepper, cinnamon and pounded lard; one studded the fat with cloves until it was entirely covered, then placed it inside a pâte.
In the 16th century, the most fashionable pâtés were of woodcock, au bec doré, chapon, beef tongue, cow feet, sheep feet, chicken, veal, and venison.[22] In the same era, Pierre Belon notes that the inhabitants of Crete and Chios lightly salted then oven-dried entire hares, sheep, and roe deer cut into pieces, and that in Turkey, cattle and sheep, cut and minced rouelles, salted then dried, were eaten on voyages with onions and no other preparation.[23]
Yummmm
It’s just confusing. “Celery powder” or stuff like that in cured meat ingredients is just a natural source of nitrate
I honestly would have no idea how to stop eating processed meats if I had a normal diet. Apparently hot dogs aren’t just sausages? Is sliced turkey “processed”? How does bacon have enough preservative in it to be comparable to salami? And why is smoked salmon, presumably preserved with the same chemicals, excluded - does one have to not eat that?
Based on the examples, I think poultry breasts and thighs, ground meat, and steaks all qualify as "unprocessed."
It's why I went to dig into the definition. If I got the definition wrong, well, then it's horrible advice (having to dig through two separate links to get a definition that turns out to be wrong).
I’m still not sure about ... sliced turkey breast in a plastic bag. I think it is processed, as it’s meat preserved by ... celery powder, which has nitrates as the preservation mechanism.
There were some studies of Adventists that found vegetarians had better health outcomes. Perhaps this is where we get the idea that animal products are bad. But I am not aware of any studies comparing healthy meat-eaters with healthy veg or vegans. Vegetarians seem to be a health-conscious group, so it is absolutely unfair to compare them with everybody else. Too many confounds.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4191896/
I seriously doubt this is true. Reason being: for our evolutionary history we have been eating animal foods as a part of our diet. All groups of hunter gatherers, as well as agricultural cultures studied, have eaten some meat, and for most it’s a significant energy and diet contributor (or fish or other animal products, but mostly meat) [3]. Hunter gatherer populations have individuals who regularly live to 60 and 70 and somewhat rarely 80 [1], and despite their meat eating they don’t show signs of the chronic diseases we have in modern life [2].
1: https://content.csbs.utah.edu/~hawkes/Blurton%20Jones%20et%20al.%202002%20AJHB.pdf
2: https://www.nature.com/articles/1601353.pdf?origin=ppub https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/obr.12785
3: https://academic.oup.com/ajcn/article/71/3/682/4729121?ijkey=6536994d4056923cce454119710dee50e91ed841
What about the difference between the meat we eat, and the meat hunter gatherers eat? Are there any studies on the effects of eating only factory farmed meat vs. eating wild animal meat and/or meat from small-scale-produced domestic animals?
In terms of just fat content, yes: HGs that eat fatty meats, such as marine mammal meat, do not show such chronic diseases, I think, as posted about last thread. I dunno beyond that.
As I said above too, I’d recommend small scale domestic animals, and especially not modern breeds and ones not fed commercial feed.
That's overly general. Some animal foods are good for us, e.g. fish and dairy; and it's hard to eat a healthy vegan diet (you need to know what you're doing). See, e.g. https://academic.oup.com/cardiovascres/advance-article/doi/10.1093/cvr/cvab173/6314360
I’m also pretty sure beef and pork and other mammals are totally fine. Existing studies tend to pack them in with every other aspect of the modern diet, and plenty of hunter gatherers eat other large mammals. I’d still recommend eating high quality meat though, raised on grasses if a grazer or a mix of farm food if something like a pig, as opposed to commercially farmed ones.
This reminds me of the xkcd comic about "if it were real, it'd be monetized by now." If veg*nism were better in any real capacity, then we would expect to see institutions and teams win using it as a competitive advantage. As far as I'm aware, this has never been true.
Historically, veg*nism is mostly limited to religious ascetic traditions as a form of self-denial or self-sacrifice - it has never been about "we can live longer, be healthier, be stronger," etc until very recently. There are no elite military units that avoided animal products.
"But gladiators were vegetarian!" Sure - they were slaves, and were fed cheap food that would fatten them up to make fighting more interesting. Not really what you'd pick as a counterexample here.
I don't know how expansively you've read in nutrition space, but I'd be ***super*** cautious about drawing any conclusion as firm and broad as "animal foods are bad for us". Everyone from the vegans to the carnivores can make a plausible, evidence-based argument for their position.
In general, the extremist position often goes too far. As far as I can tell, limiting intake of certain things down to a certain point (or increasing up to a certain point) is good, but limiting or increasing beyond those points doesn't have additional benefit.
It seems like everyone can find examples of “I switched to X and saw improvements in weight/cholesterol/etc.” I’d bet that the biggest confounder is being aware of what you’re eating. Most people don’t pay attention to what they’re putting in their bodies, so it makes sense to me that most diets offer a comparative advantage simply by forcing consumers to be conscious of their dietary decisions. Not that this is the only thing that matters, but I’d argue it’s 90% (or more) of the effect.
At this point, I believe there are quite a lot of people (certainly not a majority) who have tried one deliberate approach to eating after another. They're not just switching out from eating a lot of junk food and fast food. They've done keto, vegetarian, vegan, soylent, etc.
There's also the big selection bias in that these are a) people who wanted/needed to switch (vs people who already had something they were content with) and b) people who found the new choice substantially better (vs those who switched back or kept shopping). The other realities don't make as great endorsements.
Yeah. Also dietary restrictions mean you probably can’t eat the combo burger or the smoothie or whatever because it probably had a restricted element in it, so you’ll eat less of that. Many many many other confounders.
What's your ethnicity? Cattle herding societies, such as northern Europeans, East Africans, and South Asians, have developed the gene to digest lactose (last I heard, independently in the case of East Africans vs Indo-Europeans). East Asians and SE Asians don't have that gene, and have trouble digesting dairy products. Likewise, meso-American populations developed the gene that codes for the enzyme that can digest bean proteins. Thus they don't fart like many people of European descent do after eating beans, and they can digest the bean proteins more efficiently than people of European descent. Likewise, northern Europeans seem to do better on diets with plenty of red meat.
FYI, if you're depending on nutritional studies to guide you, it's worth noting that most nutritional studies haven't proven amenable to being reproduced...
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4785474/
How interesting. I've come to the opposite conclusion, albeit using a far more anecdotal approach. I'm about three years into a ~ 90% carnivore diet of red meat, organs, eggs, cheese, butter, and tallow. Before this, I dabbled with vegetarianism, veganism, and run of the mill "Whole Foods paleo" style eating. At 34, I feel better than I ever have without much of a change in lifestyle besides the alternating diets; I've always been exceptionally active and sporty.
The other 10% of my diet is mostly fruits and the occasional breakfast cereal binge.
I started eating like this a few years ago after seeing how it positively affected two close friends of mine; they went from soft, slow, and tired, to impossibly strong, healthy, and active. They made a lot of changes though, including adding an exercise regimen to their lives.
Just thinking out loud, it seems odd to me to consider that over the course of eons we wouldn't have adapted to eating animal foods, and that evolution would have somehow plagued us with poor health as a consequence of eating what has (always?) been a significant part of our diet.
If you remove animal foods, what are you replacing them with? Most of the food in the grocery store is evolutionarily novel, which seems like it should be largely less than ideal from a health perspective.
I eat whatever the fuck I want and as much as I want (little Caesars and taco bell are staples) and never feel tired. But I get ~1-2 hours of intense exercise every day. And I feel much better now than this time last year, when I was eating a no sugar, home-made healthy diet of whole foods but not exercising nearly as much (broke up with girlfriend). My point is that adding an exercise routine is such a confound here, it obliterates the significance of diet in my view
Doesn't that get extremely expensive? If you don't mind me asking, what is a typical week's groceries for you? I eat pretty clean, with generally tofu, chicken, potatoes, milk, frozen green veggies, and fresh fruit making up my diet, and my impression is that pushing into more protein, and more direct animal protein, gets very pricey and hard to store with once-a-week shopping.
It does get expensive, but I consider it as an investment that will save me a lot of heartache and money down the road. I've done a cow share before with friends, which significantly reduced the cost per unit, and I don't often buy the more expensive cuts of meat. I buy frozen berries more often than fresh. I am under no illusion that such a diet is accessible to most people and I feel very fortunate in my position.
I thought this had been settled long ago. I suspect posters here tend to be older because the young among us are going enthusiastically to vegetarianism. I am far from young myself. Veg for almost fifty of my 79 years, but I am the picture of health. A yoga practice for all that time helped, I am sure. As for the young, my six kids, ranging from 55 to 35 years of age are also lifelong veg and equally as healthful. We take no fancy supplements. A varied diet is all that is needed. There are many reasons to embrace a vegetarian and (and especially vegan) diet, way too numerous to go into here. They fall into the categories of health, ethical, environmental, spiritual, financial and moral.
Several ongoing studies of Seventh-day Adventists, beginning in the 20th century, should convince anyone. (One large cohort eats meat. The other does not. All are of the same racial background.) Conclusion: "Vegetarian diets in AHS-2 are associated with lower BMI values, lower prevalence of hypertension, lower prevalence of the metabolic syndrome, lower prevalence and incidence of diabetes mellitus, and lower all-cause mortality."
You're also overlooking the fact that your average vegetarian is far more likely to engage in other healthy habits than your average non vegetarian, such as exercise, not smoking, and having a higher SES.
I have a hard time even making sense of the idea that a vegetarian, and especially a vegan, diet could possibly be a reliably healthy way to eat long term. Not a single indigenous society in history has voluntarily adopted a vegan lifestyle. The remaining indigenous societies we have today universally prize animal foods above all else. Why would evolution select for vegetarianism throughout a long history of eating meat whenever possible?
The argument is that it's similar to the situation for sugar. Why does evolution select for little kids to loooove sugar so much that, if left to their own devices, they would subsist entirely (for as long as they lived) on candy and ice cream? The conventional answer is that: in the natural environment, plain sugar is exceedingly rare, but also a valuable source of concentrated calories, so there is no harm and some benefit in making Australopithecines love it enough to go to the trouble of carrying a handful of apples with them when they stumble across an apple tree once in a blue moon. They can't possibly get too much, and prodding them to get a little more (especially the kids) is adaptive.
The argument is, the same thing applies to meat. In the primitive state, you only get fresh meat by hunting down something with hooves and horns that runs about 3x faster than you, beating out any more fearsome carnivores in the vicinity, and successfully killing it with your bare hands, or a sharp stick, without being killed (or badly wounded) yourself. This is most likely a pretty damn rare occurrence. So if getting meat *at all* is something your Australopithecine can count on *maybe* once a week in the dry season, there is no harm and considerable good for evolution to maximize the taste for the stuff. Like the sugar, you can't possibly get too much, and a taste that encourages you to put the extra effort into chasing the cute baby antelope with the broken leg is adaptive.
In both cases, the argument goes, evolution did *not* design us to cope well with a strange world of plenty verging on excess, where we *can* eat ice cream or steak for every meal of every day if we want.
I'm not saying the argument is correct, but it's not stupid or obviously wrong.
"All are of the same racial background" does not describe AHS-2 (though it looks like it describes the AHS-1, which started in the 70s).
https://adventisthealthstudy.org/studies/AHS-2
> 65.3% are white (non-Hispanic); 26.9% are Black/African American.
Very similar situation at the same age. Only berries for fruit, to keep the carbs low, but more or less all leafy greens, eggs and cream, and meat.
Not to be too uppity, but what always strikes me about the diet is how full of energy I feel compared to everyone else. I know it’s hip to be tired all the time, especially in the doomer academic crowd I’m in, but I honestly just never get tired since I’ve removed carbs.
That's a very gout-prone diet, especially as you get old.
Then they're setting themself up for fame and fortune!
I've been vegan for almost 20 years (mainly for animal-rights reasons). My bloodwork has been excellent, much better than it was when I was omni. My health is pretty good, and I also feel great. I have no regrets, and I've converted some friends and family members to veganism. They are all doing well on it.
Veganism is a cult. Vegetarian is a way of eating. It serves some people well, and others quite poorly. Carnivore is also a way of eating. It serves some people well, and others quite poorly. People are different, go figure!
Warning that the first sentence here is the kind of thing that will get you banned if you repeat it too often. It's insulting and not backed by any argument.
I thought that was generally agreed-upon. Would it help if I backed it with an argument? E.g. cults require you to wear special clothing, and vegans are required to avoid animal products in their clothing.
> cults require you to wear special clothing, and vegans are required to avoid animal products in their clothing
Wat. That is like claiming that mother expecting from children to wear hats during winter is a part of a cult.
Clothes without animal products are definitely not special enough to be a serious cult indicator.
This is no more cult-like than the claim that laws banning public nudity are indications that a city has been taken over by a cult.
Vegetarianism is avoiding animal flesh and veganism is avoiding animal products. That's all.
Worth mentioning is that bloodwork, while far more objective than personal sentiments, does not enjoy universal agreement from the scientific community as far as markers for good health. My LDL is substantially higher now (109) than it was before going carnivore, but my HDL is great (67) and my triglycerides are fantastic (18). Carnivore evangelists like myself will say this is fine, but I think the conventional view would suggest that my LDL levels are worrisome, despite my exceptionally high degree of physical fitness.
Everyone’s confused about LDL. But I’m pretty sure the hundred million people who stop eating milk and eggs and meat because “cholesterol” are making serious mistakes. Although it might work anyway just by being a general dietary restriction, it’s still probably bad
Your brain is made of cholesterol. If you don't have enough, your body makes more.
To clarify, I don’t think that those people are not getting enough cholesterol if they stop eating meat or dairy. But that stopping eating meat or dairy isn’t a good idea for their health goals (heart disease, hypertension, whatever), and that eating meat is better for health for a myriad of other reasons.
If you don’t mind, what sort of diet change?
Funny, I gained twenty pounds in the two years that I tried to be a vegan, and my cholesterol counts skyrocketed. I suspect it depends somewhat on your genes...
Vegetarian for 10 years. Worthlessly hard diet to maintain health with, which only became apparent after 10 years, post return to an 80% keto-ish diet with grass-fed meats and wild meats. Low carb. Intermittent fast. Health indicators across the board better. Blood work, endurance, muscle build, mental health, sex drive, etc. Confounding, but I won't trade my current diet for vegetarian again.
> bad for us
Bad compared to what? If you compare a typical animal-based diet to the healthiest possible vegan diet (including supplements for things like creatine), sure, the animal-based diet will come out looking bad. But if you compare the healthiest animal-based diet to a vegan diet full of industrially processed food products, the animal-based diet will come out looking pretty good. And of course there are individual differences between us all.
On what basis did you conclude this?
There is the famous "China Study" which claimed from population analyses that red meats are bad. And many other studies too (for instance https://link.springer.com/article/10.1007/s10552-021-01478-2). But confounders abound... and I still think stuff like 90% grass-fed beef consumed in moderation (sous vide cooked ideally, not grilled/charred) or salmon is healthy.
Animal foods, e.g. meat, milk, cheese, etc.? Bad how, and how bad?
> Or would it go the other way and the Black Hole would just evaporate into Hawking radiation before anything happens to the rest of Earth?
Yes, https://www.wolframalpha.com/input/?i=black+hole+lifetime&assumption=%7B%22FS%22%7D+-%3E+%7B%7B%22BlackHoleLifetime%22%2C+%22t%22%7D%7D&assumption=%7B%22F%22%2C+%22BlackHoleLifetime%22%2C+%22M%22%7D+-%3E%2280+kg%22
Even if it didn't evaporate, an 80 kg black hole would have a temperature of well over 10^12 degrees - nothing will get near anything that hot.
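For reference, a quick Python sketch of both claims using the standard textbook formulas for Hawking temperature and evaporation lifetime; treat it as back-of-the-envelope, not a careful GR calculation:

import math

hbar = 1.054571817e-34   # J*s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 / (kg s^2)
kB   = 1.380649e-23      # J/K

M = 80.0                 # kg, the hypothetical black hole discussed above

# Photon-emission-only textbook formulas; extra particle species at these
# temperatures would shorten the real lifetime somewhat.
temperature = hbar * c**3 / (8 * math.pi * G * M * kB)       # ~1.5e21 K, far above 10^12
lifetime    = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)   # ~4.3e-11 s, i.e. ~43 picoseconds

print(f"Hawking temperature: {temperature:.2e} K")
print(f"Evaporation lifetime: {lifetime:.2e} s")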
Black hole information loss and the fate of evaporated black holes are both open questions. If the former answer is "it happens" and the latter answer is "they stick around but immediately re-radiate anything that falls in" then an "evaporated" black hole is still a matter-to-energy converter (via absorbing matter and re-radiating it as a matter/antimatter/photon mixture), which is probably not a good idea to put inside the Earth (though plausibly it could have a low enough eating rate to not be a problem).
LHC fears were definitely overblown, due to the "cosmic rays are more potent and we're still here" thing, but if we get up to actually-unnatural-on-Earth collision energies it's probably a good idea to do it in space whether or not our current theories predict doom (we do these experiments to test them, after all - they could be wrong!).
The Schwarzschild radius of an 80 kg black hole is about 1e-25 metres. Even assuming it would just absorb anything coming within 100 Schwarzschild radii of it (which is ridiculous -- black holes aren't vacuum cleaners), a cylinder of 1e-23 m radius going all through the Earth has a 4e-39 m³ volume, so 2e-32 grams at the average Earth density -- IOW you'd have to be very lucky to even catch one electron.
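The same sweep estimate, spelled out numerically (the 100-Schwarzschild-radii capture radius is the same deliberately over-generous assumption as above):

import math

G, c = 6.67430e-11, 2.99792458e8
M = 80.0                                    # kg

r_s = 2 * G * M / c**2                      # Schwarzschild radius, ~1.2e-25 m
capture_radius = 100 * r_s                  # over-generous "100 Schwarzschild radii" assumption
earth_diameter = 1.2742e7                   # m
earth_mean_density = 5515.0                 # kg/m^3

swept_volume = math.pi * capture_radius**2 * earth_diameter   # ~5e-39 m^3
swept_mass   = swept_volume * earth_mean_density              # ~3e-35 kg, i.e. a few 1e-32 g

# For comparison, an electron is ~9.1e-31 kg, so the swept mass is well below one electron.
print(r_s, swept_volume, swept_mass)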
> I'm also now wondering why some people freaked out about the possibilities of LHC creating Black Holes
They were trolling (or had no idea what they were talking about, or both).
I seem to recall certain candidate theories predict no Hawking radiation, but they also predict no way for the LHC to create black holes, so to predict a non-evaporating black hole at LHC you need to mix-and-match theories without paying much attention whether they're even consistent with each other.
Furthermore, assuming Lorentz invariance, the LHC can't possibly do anything that ultra-high-energy cosmic rays don't already do every day, so you'd also have to assume Lorentz invariance violation.
All in all, those people were about as reasonable as if in 1968 they had freaked out about the possibility of reading the bible in lunar orbit crashing Uriel's machine to keep the world bound to mathematical laws.
An upper bound to the amount of mass the black hole could absorb is the mass that could reach the black hole within the hole's lifetime if that mass were instantly accelerated to the speed of light towards the hole. I.e. a sphere with a radius of the hole's lifetime times the speed of light.
43 light picoseconds is 1.29 cm. A sphere with that radius has a volume of about 9 ml, so if your black hole is formed inside a body of water it will absorb less than about 9 grams of matter before evaporating. Even if it's surrounded by osmium (the densest naturally occurring material), the speed-of-light ceiling on absorbed mass would be roughly 200 g. Which is still a rounding error against our original guesstimate of an 80 kg initial mass for our black hole.
Also, this upper bound is a stupendous overestimate: 80 kg of mass is nowhere near enough to nearly-instantaneously accelerate material half an inch away to anywhere near the speed of light.
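Worked through numerically, with the same caveat that the bound is a gross overestimate by construction:

import math

hbar, c, G = 1.054571817e-34, 2.99792458e8, 6.67430e-11
M = 80.0                                                  # kg

lifetime = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)   # ~4.3e-11 s
radius   = c * lifetime                                   # ~1.29 cm: farthest matter that could reach it
volume   = 4.0 / 3.0 * math.pi * radius**3                # ~9e-6 m^3, i.e. ~9 ml

water_density  = 1000.0    # kg/m^3
osmium_density = 22590.0   # kg/m^3

print(volume * water_density)    # ~0.009 kg: a few grams of water
print(volume * osmium_density)   # ~0.2 kg: a couple hundred grams of osmium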
Is there a threshold density within the Earth at which matter can enter the black hole faster than it evaporates? Or does it take something like a neutron star to do that?
It looks like the threshold density for that would be around 15 kg/ml. That's a lot denser than the cores of active stars, but a couple orders of magnitude less dense than the electron-degenerate matter that makes up white dwarfs (about 1,000 kg/ml). The neutron-degenerate matter that makes up neutron stars is several orders of magnitude denser still (10^11 kg/ml).
I think the "vacuum cleaner" issue was that the outer core is a liquid under high pressure, so if a black hole lasts long enough to absorb just one more particle, another particle flows in to replace it extremely rapidly.
(But it'll take at least a minute to fall through the mantle, and I've learned here how massive a black hole has to be to last a minute.)
1 kilotonne (1 million kg) is probably around what you're looking for, as according to the formula that should last 84.1 seconds. 100 kilotonnes lasts 2.7 years.
(Lifetime scales with mass^3, so scaling by orders of magnitude either way from those is fairly mathematically simple.)
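The cubic scaling is easy to play with directly (same standard Hawking lifetime formula as in the sketches above; a rough guide, not a precise prediction):

import math

hbar, c, G = 1.054571817e-34, 2.99792458e8, 6.67430e-11
YEAR = 3.156e7   # seconds

def lifetime_s(mass_kg):
    # Hawking evaporation lifetime; scales as mass cubed.
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

print(lifetime_s(1e6))           # ~84 s      (1 kilotonne)
print(lifetime_s(1e7) / 3600)    # ~23 hours  (10 kilotonnes: 10x the mass, 1000x the lifetime)
print(lifetime_s(1e8) / YEAR)    # ~2.7 years (100 kilotonnes)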
I'm fascinated that something so illegal has a clearnet website, with a phone support line. How do you evade authorities? Isn't the risk with such enterprise enormous?
I wonder if it’s just a scam and they just take your money and don’t send fake bills. Law enforcement has to prioritize, and it takes so much paperwork to take down a website (that they’ll immediately re-register), or manpower and investigations to track down who’s shipping it, that it may take a while to catch even if you do get investigated. And yeah, they can just be in Russia or something.
Also, people who get scammed likely will not report the problem to police and other authorities.
I'm guessing the actual operators are based out of a country where crimes against Americans/the American state committed via internet aren't prioritized.