I don't understand pedestrian crossing lights. Why do you have to push a button to make them work? As far as I can tell, it doesn't extend your crossing time, which makes me think it's about conserving energy: why make the pedestrian crossing lights work when there are no pedestrians? Except that, when there are no pedestrians, the light stays "Don't Walk". Does that somehow conserve energy?
Pushing a pedestrian crossing button to get the sign to work seems stupid, but I'm going to take Chesterton's advice and assume there must be or have been a good reason for them. But what is it?
Many intersections only rarely have pedestrians, and halting traffic for 30-40 seconds every few minutes to let imaginary pedestrians cross is inefficient. Same reason some intersections with little vehicular cross-traffic will default to the main route staying green until a sensor detects a car on the crossing road, except it's harder to make a reliable automated sensor for a thing that isn't a metric ton of steel.
Well in the Texas town where I live, the pedestrian WALK sign doesn't halt vehicular traffic. It just turns the WALK sign on when the parallel traffic has a green light. So it doesn't protect you from cars turning right on a red light or oncoming traffic turning left. As a result, pedestrians get hit all the time by traffic while crossing with the WALK sign on. I, myself, jaywalk whenever possible, because it is much safer than trusting that a car won't turn into you at a corner.
It's plausible the WALK sign makes the green light in your direction an imperceptible-to-humans fraction of a second longer, but, being human, I can't detect if that's the case.
Most - but definitely not all - drivers in Saint Paul will yield to a pedestrian at a crossing. It gets weird at times. I have to cross a busy street without a light on my way to and from the corner grocery store. If I so much as glance to the other side of the street when I’m at a crosswalk, cars will slow to a stop for me.
I’m fine with waiting for a break in traffic but some hyper polite drivers won’t even respond to a waved arm ‘Go ahead I’m not in any hurry’ gesture.
Pete Davidson canceled his trip into space? I don’t think he realizes how a thing that could make him more interesting to women. Why his romantic life would probably just take right off. No more lonely nights with something like that on his CV!
Zvi in his most recent Covidpost mentioned that he couldn't find a video because it was pulled from YouTube. I found the video by putting the YouTube URL into the Wayback Machine - I didn't actually think the Wayback Machine worked on videos, but apparently it does.
Posted in case anyone here read said Covidpost and is interested in the video, or in case someone here can get it to Zvi/is Zvi (I don't know any way to tell Zvi things other than making a commenter account on one of his blogs), or in case anyone here doesn't know that the Wayback Machine works on YouTube (which is a pretty big deal considering how much stuff YouTube burns).
As a guy who is getting old and no longer in the market, thought I would offer some dating advice since I see guys here sometimes asking for it.
One thing I learned over the course of decades is that women reject you for two main reasons: because you are too in a hurry to get laid or because you are too in a hurry to have a serious relationship. It's easy to mistake a type 1 rejection for a type 2 rejection and vice versa. Maybe that seems obvious, but it didn't seem obvious to me when i was younger so I doubt it seems obvious to every young guy reading this.
An important take-away, I think, is to realize that when a women rejects you it is often for the opposite reason that you imagine. You could learn this from reading Proust, but I'm going to try to keep this shorter than Proust did. (Proust may have been gay, but he understood romantic relationships and sex better than most, straight gay or otherwise.)
If you think a woman rejects you because you aren't attractive enough, that probably isn't the reason. It's more likely because you seem either too interested in getting laid or too interesting in having a long term relationship. Meaning, if you get rejected often, you should change your strategy. If you are trying to get laid, don't. Work on signaling that you are interested in a relationship. OTOH, if you are mainly interested in a relationship, don't. Just try to get laid.
Either way, women are going to figure out what you are really interested in pretty quickly, so don't worry about sending the wrong signals. Do everything you can to counter-signal, because that will send a more balanced signal in the short run. Sending a balanced signal is what most women find attractive.
EDIT: And don't make the mistake of thinking "but THIS WOMAN I am interested in isn't MOST WOMEN". You can't read minds.
I really don't think it is. While men rate women 1-10, women are mjch more pass-fail regarding men. And there's a low bar to passing: don't be smaller than the woman, smell OK. Even those criteria will be waived if you're funny as fuck.
It also helps to be/appear deeply interested it something other than sex or relationship. Something she can relate to, e.g. some hobby you have in common.
Today's post-secondary institutions are expensive and backward. How can we do better?
Idea for accreditation system based on accrediting students rather than institutions:
1. An accreditation company (nonprofit foundation? PBC?) produces standardized tests to measure student knowledge & abilities. Tests are broken down into a set of mini-tests, and each mini-test tests a small amount of knowledge and ability. The company charges money to an institution whose students take a test, or to a person who takes a test independently, and revenues are used to produce more tests (and to prepare defenses against cheating). Students earn diplomas from the company according to some set of rules to be determined. Much like TripleByte, the accreditation company earns reputation by verifying ability correctly, and it can increase prices as its reputation increases (but if it's a nonprofit or PBC, prices should hopefully not rise without limit.) IMO students should take tests on the same topic twice, at least 8 months apart (to verify knowledge retention and discourage cram-based learning), but that's not my call to make.
2. Educational institutions (e.g. MOOCs) teach students, who pay tuition as usual. The institution chooses what (and how) to teach in each of its courses, and at regular intervals, offers a test from the accreditation company. A test will typically be composed of two to twenty mini-tests chosen according to the material that was taught in the course. The institution earns revenue equal to the difference between tuition fees and test costs. The accreditation company tracks which mini-tests each student has passed; if a student moves between institutions, the gaps and overlaps between courses at the two institutions are tracked exactly. And of course, some people (Sal Kahn?) will offer completely free courses.
This type of system bypasses traditional accreditation boards run by encumbent institutions, which have an incentive to avoid accrediting new entrants. Thus it should produce a competitive online market with low prices, while still giving students a meaningful diploma that they can tout to employers.
Now, surely I'm not the first to think of this, so why hasn't this kind of system become popular?
"Now, surely I'm not the first to think of this, so why hasn't this kind of system become popular?"
They do exist, so I think the question is "why aren't they more popular?" and the short answer is that because if the accreditation is to mean anything and not be a diploma mill, you need some way of checking that the course material is good, the students are qualified, and the diplomas or certificates mean something - and that means you end up re-inventing colleges in some form. E.g. how can you be sure QuadrupleBit graduates are as qualified (this is distinct from capable or clever) as IvyWreathed U graduates? One way is to compare coursework and results. But if both institutions have different forms of assessment and coursework? Well, get QB grads to sit a final exam.
That means you need questions for the exam, a curriculum to cover exam topics, and a place to sit the exam where you can be sure that the QB lot are not all cheating and just have Google open on their computer at home to give them the answers.
Congratulations, you have re-invented the exam hall. And if you need a place to host it, the simplest answer is to get QuadrupleBit, the online certifying company, to hire or rent or find someplace to hold that exam. And since they need this place on a permanent basis and for as many final exams as they're running throughout the year, they may as well have a building for their own use.
And eventually 'online only' becomes 'well we have all these offices and now classroom spaces as well' and it's a new college.
How do you develop a reputation as an accreditation company in the first place?
Universities have history, your QuadrupleByte has nothing. At best you're rated as equivalent to that shitty bootcamp someone's running in the city, whose alumni cannot program their way out of a wet paper bag.
You want is an institution that is a Schelling point for intellectual elites, and does some basic filtering to keep that Schelling point stable. If it is known that clever kids finish CS at $EXPENSIVE_COLLEGE, companies will hire from $EXPENSIVE_COLLEGE. The actual quality of courses at $EXPENSIVE_COLLEGE is an afterthought.
Speaking from my own experience as someone who got a Bachelor's in Computer Science, and then went into Software Engineering - conceivably the very sort of person you're trying to reach - I think that the big missing element is projects.
I credit a great deal of my success in industry to the project-oriented curriculum at the institution I attended, and particularly the class which brought together groups of fifteen people for seven weeks for the closest I got to an actual Software Engineering experience in school. They introduced me to ideas and practices around teamwork and source control which I suppose I could recite, but which would be very difficult to articulate in ways that could convince an observer that I had gotten them in a brief period of time. And if perhaps I could, the ability to verify in a test environment that I had these skills would, I think, hinge on my communication skills to an undue degree - while communication skills are important in my field, I do think this would put more weight on them than is warranted.
It's quite possible that schools do a poor job of actually verifying that a given student has learned the things that a given project is meant to teach. Perhaps they are simply going off some strong prior - whether it's the latest in education research, the dead reckoning of their most experienced faculty, or the dean's latest interpretation of their star signs (though hopefully accreditation puts some kind of lower bound on how bad that methodology can be) - that tells them that given a project which roughly fulfills these specifications, the students should have learned these skills, and if that's true of only 90% of the students and even those learn on average 90% of the skills, there's the opportunity to instill it with further overlap and maybe we'll catch it before they graduate, if they missed out on a foundational skill in their first year that a fourth-year project depends on in the same way that it depends on the students being functionally literate in the university's language.
But I don't think there's a way to identify the kind of learning we want to happen from projects in an environment where the assessments and the teaching are so decoupled. Tests can be fine for getting a student to apply specific knowledge in a narrow sandbox, but they can't really capture how well a student works at something - over days, over weeks, consulting resources (because consulting resources is a skill, which may well be more important than what most given resources have to teach; as a Software Engineer much of my job relies not on me knowing what needs to be done before but on being able to find, interpret, and apply the relevant sources).
I think this gets much worse for fields where there is very definitely not a singular concise correct answer at the end of an equation or as the result of a suite of automated tests. Which is a shame, because one big thing we want from knowledge workers is the ability to work somewhat autonomously, on broad problems with ill-defined endpoints, often collaboratively, and in novel situations. The stuff that doesn't fit this description - well, that'll get snapped up by automation sooner than later.
You'd be turning universities into teach-to-the-test cram schools.
Most of what you'd expect to learn at the university level can't be adequately tested in an examination, and for the stuff that _is_, it would be reasonably easy to cram (e.g. there's only, like, twelve possible classes of exam question for Special Relativity so let's just study them all).
Since universities already have accreditation, I would expect them to reject a new system completely.
But when I was going to my university Engineering program, I crammed a lot because my teachers were so bad that I felt that *actually understanding* the course material was out of reach for me. This is completely different than my high-school experience, in which I *never* crammed.
Uncharitable of you to ignore the anti-cramming measure that I proposed. Cramming is a short-term trick; if you have two tests spaced >8 months apart, you are certainly better off learning the material properly.
It is impossible to "learn the material" for the long term if it has no day-to-day practical use. The material would need to be integrated into practical projects executed during that 8 month interval.
Not sure why you think cramming twice is "better" than learning the material properly. I disagree about the impossibility of learning things without day-to-day practical use; I learned such things throughout K-12 without cramming.
Current education/accreditation system is a mess of all kinds of signals. It is difficult to improve, because sometimes it is supposed to suck.
For example, imagine that you create a parallel educational system that is neither better nor worse than the traditional one, but it is 10x cheaper. Would it be popular? No, because using the new system would signal that you are poor, and so are most people you know. Rich people would avoid it; and they would also have an incentive to pretend that the new system is worse.
Or imagine a new educational system that is just as good, only 10x less frustrating for students. Then employers would avoid hiring people from that system, because they would suspect that such employees would have low frustration tolerance and would soon quit after the normal amount of abuse at workplace.
Or a new system where kids learn more easily because somehow all lessons are magically easy to understand? The employers would suspect that the kids are actually less smart then they seem, and will fail when facing a novel situation.
In other words, trying to make education better is like trying to make a marathon shorter -- the people who usually run marathons will reject the idea. Education is inefficient, frustrating, and gives unfair advantage to rich people... which is exactly the point. It prepares you perfectly for your future workplace.
(If you are interested in knowledge for knowledge's sake, then of course, Khan Academy is the way.)
> No, because using the new system would signal that you are poor
I'm pretty sure most employers aren't judging you based on how rich your parents are, with some obvious exceptions (Harvard) in which wealth isn't the only thing being signaled.
> employers would avoid hiring people from that system, because they would suspect that such employees would have low frustration tolerance
As a (very) small-fry CTO myself, I want knowledge and skill and more than I want frustration tolerance. I guess some companies want that, but other companies want other things. (and why would poor people have worse frustration tolerance?)
"(and why would poor people have worse frustration tolerance?)"
It's not that poor people would have worse frustration tolerance, it's that (whether rich or poor) coming out of an education system where learning was effortless, the teachers wee excellent, it was easy to learn, everything was ready for you when you were ready to take the next step, etc. is not good preparation for a workplace where it's "dunno the answer to that, the guy who does know the answer is out for two weeks so you have to wait that long, there is paperwork with the answer on it someplace but you'll have to search for it and nobody knows where to start, and there isn't a clear-cut answer at the end, just a 'good enough' and besides the boss/boss's boss/client is going to change their mind three times about what they want anyway".
The elephant in the room is that if your parents are rich they're probably higher in IQ and/or cultural capital, which makes you higher in IQ and/or cultural capital, both of which are hugely desirable traits in new hires.
(Ah, please take the previous comment with a grain of salt; it was exaggerated for artistic purposes. This comment reflects my beliefs literally.)
There are two ways how wealth impacts knowledge/skills that come to my mind immediately (and there are probably more):
First, rich people can spend more money on education-related expenses, and I think they have more free time on average (no need to keep two jobs; can save some time by spending money on something). So if we imagine two kids with equal intelligence, talent, chatacter traits, and hobbies; learning e.g. computer science in exactly the same classroom with the same teacher using the same curriculum; but one comes from a poor family and the other comes from a rich family, I would still expect different results, given the following:
The rich kid will have a better computer at home, more time to use it, no problem with paying for a course or a tutor if necessary. The poor kid will feel lucky to have a computer at all, will be limited to free resources, and will probably spend some time helping their family (working, taking care of younger siblings). -- Therefore, at the end of the year I would expect the rich kid to know more, on average.
Second, when people think about the quality of school, they usually think about teachers, curriculum, didactic tools, and whatever... but a crucial and often ignored factor is classmates. Kids inspire each other and learn from each other a lot. Or they can prevent each other from learning by disrupting the lessons. Your ability to choose a school with better classmates is often limited by money. With the same curriculum and same quality of teachers, I would expect rich kids to get better outcome at school, simply because they are not surrounded by classmates from dysfunctional families etc.
They differences may seem small, but they add together, and their effects compound. With exactly the same curriculum, I would expect rich kids to get better results, on average, even if we control for intelligence and other traits.
Therefore, from the company perspective, a rich kid seems a better bet, ceteris paribus. (The only disadvantage is that the rich kid will probably expect a higher salary.) Therefore, if you are - or pretend to be - a rich kid, you probably do not want to signal that you attended the school for poor kids... or the cheap school.
Re: frustration tolerance -- you want some basic level of it, like people who won't give up and quit after the first problem.
And speaking as someone who has come out of the 'practical training, non-university, on a ladder of accreditation' route and worked with people who went the university route, there is a difference. Not even so much in practical skills, depending what course they did the university people can be as good and up on those, but the entire shape of the learning experience, the environment, the content.
The practical-oriented really was teaching to the exam, telling you what you needed to know to do the tasks, but nothing extra. It was to get you qualified and out into some kind of paying job as fast as possible. If you wanted extra, you could then go up the ladder to the university. People from the university just had an entirely different experience, and yes they did indeed seem to have a more rounded education, a better understanding of the subject, and just that familiarity with how academia works that is hard to put into words but you know it when you don't have it and are trying to engage on the academic level.
Hey, you seem to know your shit. Could really use an impartial set of eyes on something...would you be willing to read a few pages and give an opinion?
It seems like this sort of info about a student's mastery and retention of material would be taken seriously by grad programs or employers in STEM fields -- have my doubts about other fields though.
The step you're missing is to make big companies accept your accreditation as a valid measure of a prospective employee's worth. Companies are often at least as interested in proof of conscientiousness as in intelligence, and often not at all interested in subject matter mastery.
At higher levels, the top institutions sell networking as much as anything else, and that's a very sticky equilibrium - one of the main benefit of a Harvard grad is that they know other Harvard grads, and that's valuable because all the top places hire Harvard grads, so their network makes them valuable enough for all the top places to want to hire them
Yes, and this is why I suggested that the price of the service would be correlated to its reputation: big companies accepting it = reputation. I guess it would need somebody with deep pockets to help it survive the early low-reputation phase.
Obviously, my proposal is not intended for people who have the means to make it into (and through) Harvard. I went to an ordinary university, and the amount of networking I did there was basically zero. (edit: except the internship program - the job I got out of that lasted 7 years.)
What's the textbook example of how to tactically use tanks in warfare? I'm thinking a WW2 battle or something were tanks saved the day, did lots of the special tanks things that can't be done by artillery or motorized infantry or whatever, and then some colonel wrote the book on it.
I'm hearing a lot from armchair generals on how to not use tanks ("Don't use them against other tanks", "Don't use them in urban areas", "Don't use them without infantry support" etc.), but I don't hear much about how they are supposed to be used, thus my question.
If you want to ague that tanks are obsolete in modern warfare this is your spot as well I guess.
If your enemy has a tank (or worse, a bunch of them) somewhere, and you don't have anti-tank forces nearby, then the tank can destroy whatever stuff you put through that area, unless they are well-hidden or well-fortified. And it's hard to move while being well-hidden or well-fortified, so if the tank got there first you are outta luck. Of course, you are aware of that and will not move into that area, but that means you can't have anything there.
That makes tanks basically movable walls (the size of the wall is the range of the tank of course, not the physical size of the tank). You put them where you don't want the other guy to go through - an important version of that is to cut off a force from their rear lines and force them to surrender.
The reason they are good at that is that they are armored pretty well and can fire on the move, which makes anti-tank platforms far more limited than say anti-artillery platforms (it's easier to sneak up on a self-propelled gun or destroy it after it fires, than to do the same on a tank).
First, a tank is a km-length wall that can move operationally at speeds of tens of km/h. A kilometer of wall is a lot of wall to place, or to move to a more relevant location when needed (and you want to fill in or exploit breaches quickly, which makes moving to relevant locations very important).
Second, its much harder to destroy a tank (that is maneuvering through tank-friendly terrain, not that is driving through hostile city streets) than to breach a wall, because the tank can shoot at you then move into hiding and possibly call for assistance, and a concrete wall can't.
Depends a lot on the period. You can get a good spiel of the WW1 retrospective & pre-WW2 expectations (which held up at least for early WW2) by reading "Achtung - Panzer" by Guderian, but the key doctrinal takeaway I remember would be:
-Use them in large concentrations (i.e: not in piecemeal), with motorized infantry to support and exploit their breakthrough (and absolutely not "supported" by leg infantry, which can't match their speed, and slow them down to get pummeled by artillery fire, as in WW1, or WW2 french doctrine).
-Best way to stop them is another tank (that was before air support got precise enough to pose a threat)
Supposedly De Gaulle reached similar conclusions in "Vers l'armée de métier" (or was it "La France et son armée"?), but I haven't got around to read it yet, so I can't comment.
What you're really asking for is how to do combined-arms warfare. There are very few military problems where the solution is "send tanks, just tanks", but many where tanks are a useful or vital part of the solution.
The classic use case for tanks, and *maybe* justifying a pure-tank force, is exploiting a breakthrough in mobile warfare. Not making the breakthrough itself; you'll almost certainly want artillery and infantry to help with that. But if you've broken through the enemy's defenses, you want to move fast and break things behind the lines before they can offer a coherent response, which will involve meeting engagements with elements of their incoherent response, and that's something tanks are really good at.
And for much the same reason, tanks are good at mounting rapid counterattacks in the face of an enemy breakthrough.
There are some good recent-ish examples (but not pure-tank) in operation Desert Storm, and in the Arab-Israeli wars.
Sure, I get that you want to do combined arms warfare. But what is the role of tanks in combined arms warfare?
Why are they better than e.g. mechanized infantry at exploiting breakthroughs? Are they better armored so that they have an easier time driving past pockets of resistance? (If so, can't we just slap more armor on an APC to achieve something similar?)
What does exploiting a breakthrough actually entail in practice? Do you hunt down enemy artillery and C&C? Do you try to get the enemy logistics (by firing randomly at trucks)? Do you attack the enemy in the rear? Do you just drive as fast as you can against Berlin/Bagdad and hope to create as much chaos as possible on the way? I guess tanks need (mechanized) infantry to create a famous WW2/style pocket, but it's good to have the tanks in the front while doing so?
The Iraq tank battles just seem to be coalition tanks driving through and obliterating technologically inferior Iraqy tanks. Couldn't this have been done by infantry or air power? Valley of Tears looks interesting but the Wikipedia article is hard to parse, I'll look into it.
Mechanized or even motorized infantry can support breakthrough operations, and in some cases (e.g. when the enemy only has leg infantry), can do the whole thing.
But infantry, even mechanized, has to dismount to fight. Otherwise it's not infantry, it's just an inferior form of armor handicapped by having to haul a bunch of useless people around - and no, their shooting assault rifles out of firing ports isn't useful enough to be worth the bother.
And dismounting, fighting even a skirmish at a walking pace, and then remounting, takes time and costs tempo. Sometimes it's necessary, e.g. to clear enemy infantry blocking your advance in close terrain, but if at all possible in a breakthrough operation you want to defeat at least minor blocking forces on the move. For that, you want tanks.
As for what you do in a breakthrough, part of it trying to overrun C3I facilities, logistics nodes (not random supply dumps, but supply depots, railheads, critical road junctions etc), and artillery. I'd put it in that order of importance, but it's debatable. The other part of it is maneuvering to block enemy lines of retreat and reinforcement.
And pretty much all of it, is creating in the enemy's front-line troops and their leaders the firm perception that they don't know what the hell is going on in their rear, but it's really bad and if they don't run away *right now* they'll never have the chance. Then your breakthrough forces can ambush and kill them as they flee.
As for using infantry to "drive through and obliterate technologically inferior tanks", no. Literally no, because infantry doesn't drive, it walks. And if you're thinking they're going to drive through in their technologically superior infantry fighting vehicles, maybe but now the "infantry" isn't doing anything and even technologically inferior tanks are carrying bigger guns with longer range and heavier armor because the aren't carrying around useless infantry. Maybe you've got enough of a technological edge for that, but it's still fighting with a handicap.
If you're talking about infantry advancing against tanks on foot, no, infantry can't advance against fire like that. First, because infantry survives under fire by emulating hobbits - small, nigh-invisible, and living in holes in the ground. And second, because infantry can't fire on the move.
Even a "technologically inferior" tanks is in this context a machine gun on a gyrostabilized mount with a magnifying and if needed night-vision optical sight and nigh-infinite ammunition, with a gunner who is basically immune to suppressive fire, and capable of firing accurately while retreating faster than infantry can advance. Think first-person shooter video game - completely unrealistic at duplicating the experience of a *soldier* in combat, but pretty good for a tank gunner. Oh, and he has a big-ass cannon if he needs it.
The infantryman, is a very not-machine-gun-proof target whose attempt to advance largely voids his concealment and subterranean-ness, and sure maybe he's carrying a high-tech missile that could destroy tanks if he weren't trying to advance, but since he is it's just ballast slowing him down.
If the enemy brought *only* tanks, and parked them too close to a town or treeline or whatnot, you could imagine your infantry sneakily infiltrating into firing positions. But the enemy probably has some infantry of his own, and he's got that deployed to cover all those firing positions you were trying to sneak into. His infantry doesn't have to defeat your infantry, it just has to force it to reveal itself prematurely so that the enemy can put heavy firepower from his tanks - or better yet artillery - onto it.
Wait, that first line makes no sense to me, I thought (a subset of) tanks are explicitly designed to be used against other tanks?
More generally, I think tanks in open terrain beat infantry without tanks; combined arms is generally very important, of course, and the tanks can't be totally unsupported, but outside of cities it's hard to sneak up on a tank. Air superiority is of course the ultimate trump card, tanks don't beat bombers.
Historically, I think Germany's Blitzkreig of France is the ur-example of the value of light tanks and motorized infantry. The biggest advantage of tanks over artillery is their mobility, so if you want examples of things only tanks can do look for maneuver warfare.
It's a misunderstanding/oversimplification of American armored warfare doctrine during WW2, which emphasized tank destroyers (typically in battalion-sized units attached to infantry divisions) as a defensive counter to massed enemy armored offensives. The misunderstanding comes in reading this as saying that *only* TDs should fight tanks. Tanks were seen in this doctrine as being perfectly capable of fighting other tanks. The actual point of the TD emphasis was that the design differences optimized TDs for a defensive response role, while tanks were optimized for breakthrough and exploitation: American TDs of mid-to-late WW2 were somewhat cheaper, a bit faster, and mounted heavier guns than tanks of the same generation, at the cost of substantially lighter armor, making them better at responding to enemy offensives and fighting defensively from cover with infantry and artillery support, but much less survivable on the offensive. This, TDs were the first-line response in support of infantry for enemy armored offensives, freeing up tanks for things they did better than TDs.
When the doctrine was first developed, the intent was for TD to be a lot cheaper than tanks, initially conceived as light infantry-support antitank guns towed or mounted on jeeps or light trucks. This way, you could have TDs everywhere you needed them for a fraction of the cost of tanks. But by mid-war, TD designs got more capable, more tank-like, and correspondingly more expensive so an M4 Sherman tank wound up being only a little more expensive than an M10 tank destroyer.
What would have happened if the Germany had done the invasion of France with 50% less tanks and more motorized infantry instead? Would it have been worse, and if so why?
This is a random place to post this but I asked a question like this before on an open thread and Scott responded. Any help appreciated from anyone with relevant medical industry knowledge.
I am certain I have ADHD. It hugely affects my job performance and I'm constantly worried about getting fired. I'm hoping to get prescribed adderall. I'm wondering what the chances are with my current planned process:
-I have made an appointment with a telehealth psychiatrist through some large online group.
-This site specifically says they themselves do not prescribe drugs like Xanax and adderall, but that if they think it's necessary they can fax your PCP to have them make a prescription.
-I have made an appointment to see someone as a PCP next week, two days before the psychiatry appointment.
-This appointment is my first time meeting them and I said I wanted to talk about ADHD in an office visit
-But they are a Nurse Practitioner, so I have no idea if they're allowed to prescribe anything/more hesitant.
Checkout Ahead (helloahead.com). The PNP I work with prescribed me Adderall. No contact with a PCP (which I don't have). They also manage my anxiety meds.
Some NPs are not allowed to prescribe certain drugs. It should be fine for you to call and clarify with them, asking them if you come with a referral will they be able to prescribe for you. Congrats on taking steps towards treatment.
The authors of Meta-analysis studies should be required to present a table listing the included studies and excluded studies along with the exclusion criteria that each of the excluded studies failed to meet. That is all.
They should also list their reasoning behind their inclusion/exclusion criteria!
Just finished reading two meta-analyses with differing conclusions, and I've come to the conclusion that I'm no closer to the truth than I was before this exercise. I don't know if one or both of the authors are trying to pull a fast one. And without being able to look at the studies included in the meta, I cannot judge the validity of either of the metas.
Wow, that's irritating. They put "Meta" in their title and become arbiters who can't be challenged. (Sort of like freakin' Facebook changing it's name.). What field are these articles in? I've read a dozen or so psychology and neurology meta-analyses recently and they all really spelled out their criteria for judging study quality. Many did not list all the articles they considered, but in many cases that seemed reasonable -- there were thousands. Maybe writers of metas should be required to publish that info in an appendix, though.
It was two meta-analyses of long-term COVID. Actually, they didn't have opposite conclusions—just different conclusions. They spelled out their criteria, but it was what wasn't mentioned that made me unable to compare the two. The first M-A's criteria was only studies published in the English language. The other one didn't mention that criteria, so I'm left wondering if they had a more diverse pool of international studies, and that that may account for the difference in conclusions.
Also the first study had a criteria that each study have a minimum sample size of 30. The other study had higher sample size criteria. But I'm left wondering why the first study was OK with 30. Seems too small to be statistically valid (?). And how many of those studies in the first one had sample sizes less than 100? Aarrggghhhh.
About study size: the smaller the study, the larger the effect size has to be to capture it. If I wanted to find out whether capybaras weighed more than hamsters, and compared 2 groups of 15 randomly selected members of each species, I would definitely find a statistically significant difference in weight. If a small study finds a statistically significant difference, then you can trust the result (unless there is some other problem with the study design -- such as wrong statistical methods done, groups compared were different in important ways beyond the one you were checking for). However, if small study finds no effect that may be because effect doesn't exist or just because study was too small to have enough power to capture the effect.
About getting info about long Covid: Epidemiologist Jetelina, on Substack, just put up a series of posts on the subject. So this is sort of her unofficial meta-analysis. I trust her, mostly. She’s smart and thorough and seems to have no ax to grind.
Ha! I got kicked off her Facebook group. It was early in the pandemic and I was questioning the some of the consensus wisdom of epidemiological theory. I thought I was polite, but I got exiled from that FB group. OTOH, I can be rather outspoken so maybe I deserved it. ;-)
Wow. Well, she does have a bit of a kindergarten teacher quality to her -- sort of pathologically nice and hyperconventional. If you can still stand to have anything further to do with her, her Substack blog seems pretty good to me -- informative, and politics-free.
As for getting kicked off things -- I'm a member of the too-rude-to-remain club too. Spent a year on Twitter, driven crazy by snark and trolls even though I only followed science writers. One day a red-state male troll started dropping turds on a thread about some technical virus thing, so I came back with the term most likely to offend his demographic: cocksucker. Now I'm banned from Twitter and glad of it. Heh.
‘ McDonald's sells "food” that is absolutely impervious to rot and decay. You can buy one of their hamburgers, put in on a shelf in your living room and just leave it there. After a year the burger will still look and smell the same. None of the rodents that you unwittingly share your house with will have deigned to touch it. Nor will any insect, no fly no wasp, nothing. Even bacteria will stay away from McDonald's products. This will give you an idea of the quality of American fast food. KFC specializes in products made from bio-engineered, hormone and antibiotics-fed chickens growing so fast they never learn to walk. The meat from such creatures will probably help accelerate your transition from "cisgender” to anything in the LGBTQ spectrum, whether you want it or not. Starbuck's specializes in something it dares to call coffee but that anyone who really knows and likes coffee will shun.’
I never felt "wow this is quality food" while eating McD, it always tastes like some kind of space colony faux food that's supposed to remind me of what they used to eat back on Earth, but fails at it.
> None of the rodents that you unwittingly share your house with will have deigned to touch it
McDonald's, KFC and especially Starbucks may be terrible, but I'm glad I live in a country where it's not simply taken for granted that every house has rodents.
"Degenerate Western food makes you trans" (presumably in contrast to virile, natural Russian foodstuffs) is a take so cartoonishly anti-Woke I didn't think pravda.ru would actually publish it.
That sort of stuff is aimed at Poland, Czech Republic etc. Russia is the defender of traditional values. It’s an on going thing with Pravda. Trying to appeal to East European NATO countries. I know. It’s all so very strange.
The grammar is pretty good in this one. I’m guessing it was written in English by a human. The machine translated stuff usually makes a hash of idioms. The underlying differences in grammars show through too.
Are regular KKK meetings actually a thing? I was under the impression that the KKK doesn't meaningfully exist any more. This isn't Robert Byrd's day.
The ADL (whose incentives certainly run towards maximising rather than minimising the extent of Klan activity) most recently https://www.adl.org/education/resources/reports/state-of-the-kkk reports the existence of thirty groups claiming to be the Klan, but most of them are just a handful of people and they tend to pop in and out of existence rather rapidly as people lose interest.
Sadly, the actual results seem to only be in the article body, which is behind a paywall. Unless someone here has institutional access to Sagepub or JSTOR and feels like reading it and summarizing for us.
I don't like that study because it focuses on "former VIOLENT U.S. White supremacists" (emphasis mine). I'd rather see the data for the KKK members who don't actually act on their racist beliefs by attacking nonwhites (they probably form the group's majority).
My hypothesis is that, if you're so racist that you're actually willing to go to KKK meetings, you're probably mentally ill.
Seems like a fair comparison group would be other extremists, both left, right, & totally disaffiliated -- like test the people in Anonymous (if only they weren't all anonymous!). It may be that unhappy and desperate people are drawn to extremism, and/or extreme views create desperation. If some of the wilder apocalyptic theories were true, suicide might be a rational choice.
Notably, he says that Alexsandr Dugin ("Putin's brain") wrote a book in 1997 called "Foundations of Geopolitics: the Geopolitical Future of Russia" which is Putin's playbook and can be used to understand and predict Putin's moves.
The book's 40-year plan:
Step 1. Invade Georgia
Step 2. Annex Crimea and control Ukraine
Step 3. Separate Great Britain from Europe (Brexit?)
Step 4. Chaos: sow division in Britain and the US
Step 5. Create "Eurasia" which (based on the map) looks like basically Russia surrounded by "buffer states", with China "divided and in turmoil", and Japan and India as allies of Russia (I note that while Japan voted to condemn Russia's invasion, India was neutral and is now setting up a special payment system to avoid commerce interruptions caused by sanctions. Evidently Russia bagged China as a Russian ally instead of Japan. My impression is that while China isn't completely sold on the invasion yet, it is spiritually siding with Russia and the reason it isn't doing more to help Russia is that it fears "secondary sanctions".)
Steps 3 and 4 involve using the 3 Ds, Deception, Destabilization and Disinformation, to create internal divisions in Britain and the U.S.; internal divisions in the U.S. are to make the U.S. more isolatist and distant from Europe (hence Putin's support for Trump, who in turn pulled out of multiple international treaties). Also the book calls for a "Continental Russia-Islamist alliance [as] the foundation of anti-Atlanticist strategy" (hence Russia's ties to Iran & Syria)
He also has a video about China's master plan: https://www.youtube.com/watch?v=WaAOss6W1u0 - this video begins by telling me that China already beat the U.S. on the metric of "GDP by PPP (purchasing power parity)" in 2014, though note that the per-capita *incomes* of Chinese people by PPP are just over one-fourth the incomes of U.S. people. Which itself is probably part of the plan... to sacrifice income for more GDP and more power on the world stage. China seeks world domination, and on the economic front, they seem to be ahead of schedule.
Oh, now that makes me wonder if some Chinese policy wonk read that book and decided that instead of letting Russia be Ruler of Eurasia, with Japan as an ally and China internally divided and in turmoil, it would be smarter to cosy up to Russia, sell themselves as an ally, and remain a major, non-conflicted, partner if or when Eurasia is a thing that happens.
That would explain (to me) why China is lining up with Russia right now, instead of sitting back and seeing how things play out. Even if Russia manages to shoot itself in both feet with Ukraine, China can still be the "I'm your friend, see how I supported you?" partner and be in a good position to gather up the fragments from the fall-out if Russia instead starts falling apart with internal conflict.
Meanwhile, in this video he predicted a near-term financial crisis which didn't happen (we just got some inflation, and if there's a crisis now I think it'll be triggered by Russia) - I suspect that he doesn't understand macroeconomics well enough (which is not unusual; my impression is that even economists themselves have multiple incompatible models that make different predictions): https://www.youtube.com/watch?v=EYOVoQT2yQg
In his otherwise reasonably accurate 2020 prediction video about vaccines, he characterizes what sounds like it should have been a crisis in 2021 as "Massive wave of bankruptcies, unemployment rise again, and debt bubbles bursting": https://www.youtube.com/watch?v=yahfx_JIihQ ... looks like he overweights the importance of debt and QE https://www.youtube.com/watch?v=nUOVRo_EIrE ... whereas my model is closer to market monetarism: I do not find debt to be important except indirectly, and I expected a "market adjustment" but no crash.
I have a friend who's in the market for a dating coach. He's in the general Astral Codex Ten audience demographics - about 30 years old, tech professional, generally liberal. He has been unable to find a good option with experience working in those demographics. Does anyone have a person to refer him to?
I would advise finding a local one. Dating advice changes quite significantly depending on country or even city. A local coach knows local quirks and also good spots/locations to meet people.
Also check the coaches age. A lot of them are mid 20s or even younger. The game works differently for 30+. A 22 year old won't give you useful advice for your age bracket.
If your friend is into online dating I might give some pointers on optimizing his profile. Been doing a lot of work on this topic for a machine learning project I'm working on.
In general the youtube channel "School of Attraction" has a lot of online dating advice. Also search for "Reddit Tinder Guide". There are multiple good ones.
More than that we need to go into the specifics of the actual profile. I'm pretty new at substack and don't know if there is any kind of personal message. But if you want you can contact me and i can give you/your friend specific advice about the profile(s)
I’m going to assume that date coaching is a thing now. It wasn’t when I was single.
Is your friend very shy? That would make it harder. If that is part of the problem, he could try getting regular exercise. Cardio and weight training relieve anxiety and help with self confidence.
If he is up to it, being able to dance a bit would give him a chance to meet potential partners.
I grew up in Russia (but left to the US as a teenager, on my father's H1B visa). This means I still know a number of people in Russia who are disproportionately techy, and statistically I'd expect some of them to be interested in no longer being in Russia right now. (Some will have left already, some will want to stay no matter what.) Is there a more effective way to look for jobs that might sponsor them than "ask your company if they'll sponsor a visa, ask your friends to ask their companies if they'll sponsor a visa, etc."? Also, consider yourself asked :)
I work for a biggish consulting company and our policy is to sponsor advanced degree holders (MBAs, PhDs, MDs etc.) coming in at the consultant level but not for analysts who usually come in at the undergrad (BS/BA) level. That's for the U.S., not sure what the policy is in our international offices. I've also asked about expanding sponsorship to further down the ladder and it's definitely being considered but I don't think that policy is likely to change at least in the short term.
If you know anyone who might be interested they can reach me for more info at gbz.uraarffrl@tznvy.pbz (rot13)
Note that I asked this again lower down in the advert post, and they (Dave92f1) said that they're willing in principle but haven't done it in practice, and also that they'd consider remote.
Also, if you want to send a resume my way, my company (http://www.cyberoptics.com/) is looking for at least one software person and at least in principle willing to sponsor people; I'm lastname at gmail.
Scott, thank you for helping set me on the path towards effective altruism. Your writing was deeply influential to me in high school and early college, and I think it was a really big part of why I got into EA (where I get a lot of self-esteem from these days). Since I think its relevant, I'm a senior software engineer at a FAANG and I donate around 30% of my pre-tax income, so include some fraction of that in your total impact!
[Context, I'm reading through 'What got you here won't get you there'. It recommends thanking the top 25 folks most influential in your professional life. Scott handily qualifies for me.]
Seconded. I'm in the middle of a career switch from lucrative but soulless software dev to medical bioinformatics that's socially useful, very interesting, very frustrating, and paying next to nothing.
Scott's writing, especially UNSONG, has been one of the main things that pushed me to finally do it and ruin/fix my life.
Has anyone seen studies citing a failure rate of one country invading another country? It seems like this might help forecasters take the outside view of Russia-Ukraine. I can't figure out the right search terms.
I expect a wider range than the 70%-90% for M&A failure. One reason is the difficulty of identifying invasions due to proxy warfare. (Should the Bay of Pigs landing by anti-Castro Cuban exiles be classified as the US invading Cuba by proxy, or an abortive civil war?) Another reason is the difficulty of defining failure, since political goals are harder to evaluate than corporate profits/losses.
The rates probably vary by technological era, as new weapons make offense or defense easier.
I have a theory that Scott pseudonymously wrote an alchemical allegory disguised as a bad Harry Potter fanfic, but nobody got the joke, so he had to write a whole essay explaining it. If true, that is my favourite.
Reading "Sort by Controversial" and the comments thereof made me learn the origin of the phrase "not by one iota", which is now one of my favorite facts. Is this what people learn in Sunday School? Why did no one tell me?
> The First Council of Nicaea in 325 debated the terms homoousios and homoiousios. The word homoousios means "same substance", whereas the word homoiousios means "similar substance". The council affirmed the Father, Son, and Holy Spirit (Godhead) are of the homoousious (same substance). This is the source of the English idiom "differ not by one iota." Note that the words homoousios and homoiousios differ only by one 'i' (or the Greek letter iota). Thus, to say two things differ not one iota, is to say that they are the same substance.
Except, is that etymology true? The online dictionaries I check don't mention it as an origin for that meaning and instead say it's from iota being the smallest letter and therefore almost insignificant. (The do mention that 'jot' derives from this, as iota also is transcribed as jota.) And Wiktionary quotes it as being from the New Testament "until heaven and earth pass away, not an iota, not a dot, will pass from the Law". So that predates Nicaea (I presume?).
So, uh, is this a scissor statement? Discuss at your own risk.
One piece of evidence against this etymology - Hebrew has an expression "On the tip of a yod", which means "decided based on a really tiny difference between two otherwise equal things", which feels like the same expression. Yod is the Hebrew alphabet version of iota and is just a really small letter (אבגדהוזחטי - yod is the little one on the left). Iota is also a pretty small letter. So if "on the tip of a yod" and "by one iota" have the same origin it was probably from the graphics of it rather than some complex greek spelling.
If there's any skill of Putin's I don't doubt, it's hand to hand combat. Musk would only have a chance because he's challenging a 70 year old man, so it's a lose-lose situation for him - either beat up a harmless grandpa or even worse, get beat up by a harmless grandpa that happens to be a sambo black belt.
Putin will select a capsule-shaped object whose purpose and usage is known only to himself and a couple of guys in the FSB who couldn't figure out how to leak the info to Musk before the fight. RIP Musk.
Well, Putin theoretically has the power to stop the Invasion of Ukraine, but Elon certainly doesn't have the power to... force Ukraine to surrender? ... so this simply doesn't work even in principle; Putin doesn't have anything to win.
Is heavy meat eating in humans an adaptation for famine resistance?
There are periodic droughts, blights, etc that hurt crop yields. If the crops that are grown all go to feeding people, then any drop in yield means someone goes hungry. Meat eating provides resilience.
1.) Some animal feed (corn, turnips, etc) is also edible by people. In times of famine, humans can eat this. Animals go hungry (or production decreased) instead of people or switch to non-human edible food (like grass).
2.) Animals can be slaughtered during times of famine. By killing animals early or killing animals kept for eggs or dairy, additional calories can be gained in the present.
Right now we overproduce food (as measured by calories) and invest the excess in producing meat/dairy/eggs. As society moves towards less animal based food, are we going to get rid of our safety margin? Are we going to become more vulnerable to famine?
I think it's mostly an adaptation to A: humans evolving long before agriculture, as hunter-gatherers, and B: agrarian humanity finding itself on a planet with a lot more mediocre land suitable for grazing livestock than good land suitable for growing grain.
I'm not convinced that "society moving to less animal based food" is an actual trend. While it may be a trend in the particular geographical areas and social classes in which ACX commenters tend to move, I think this trend is more than cancelled out by the billions of people slowly moving out of poverty and finding themselves able to afford to eat more meat. In China, for instance, annual meat consumption has gone from 10kg per capita to 50 kg per capita since 1980: https://www.researchgate.net/figure/continued-growth-projected-in-chinas-per-capita-meat-consumption-source-usda_fig5_321111368
It's unlikely to be a deeply genetic adaptation, but you could see it as a cultural adaptation, in some places, in some contexts.
Meat animals could be used as a store of calories in pastoral cultures. At the same time, the animals would be used to turn poor land into usable calories in the first place - i.e., having a flock of goats pasture in the inarable, rocky scrub of the near east or herds of cattle ranging across the dry American west.
More typically, though, food preservation was the buffer against famine. Animals only live so long when you don't feed them.
So people learned to store grain, ferment sugars, salt meat, make cheese of milk. We still do some of these things, and we also can and freeze and so on. In the case of famine, we would still mostly rely on these methods - and would likely devote less calories to meat production to re-establish a buffer going forward, but that wouldn't change our food supplies then.
Our food buffer is HUGE in historical terms. We just produce a crap-ton of calories with modern agriculture. So no, I don't think we're becoming more susceptible to famine. Say what you will about it from a gustatory perspective, a sack of rice, cans of beans, and a pallet of spam tins keeps a heck of a lot better in a basement than a live cow.
No genetic adaptations!? Teeth shape and size are genetic adaptations, and humans have omnivore teeth — sharp front teeth (incisors and canines) to rip and cut meat as well as flat molars to crush plant material and chew meat. Indeed, the acquisition of fire between 1 and 2 million years ago (depending on which group of paleoanthropologists you listen to) resulted in the more efficient digestion of animal protein and very likely affected the shape of our mouths as well as the shape of cranium.
Despite PETA claims that human digestive systems are that of herbivores, we don't have those specialized digestive sacs that herbivores have evolved to temporarily hold plant material (along with the specialized gut fermentation bacteria) to ferment that plant material. Ruminants like Cervidae and Bovidae do their fermentation in forward sacs, whereas Equidae, Rhinocerotidae, and all of the Cercopithecidae (I think) have posterior hindgut sacs. Omnivores and carnivores lack those specialized storage and digestive sacs to ferment plant material — as do humans. Also, humans, like other other omnivores, have intestinal tracts are in between the length carnivores and herbivores.
Humans share all those traits with the other primates, which, in general, obtain the bulk of their calories from plants, not meat.
Gorillas are nearly vegan. And yet, gorillas have sharp front teeth (much sharper, in fact, than ours!), and guts like our guts, without any of those specialized sacs.
One could make a similar point, by the way, about the famous frontally placed eyes, often mentioned as evidence that humans evolved primarily as hunters. All apes have them. And yet, gorillas don't hunt.
I think the main way in which the acquisition of fire matters, and may have driven changes in our anatomy, is not that cooking allows you to digest meat, but that it expands the range of plants you can eat, to include starchy roots, grains and legumes, which grow in the wild.
You can eat meat raw, but you can't obtain many calories from a raw potato. Present day adherents to "raw" (in the sense of uncooked) diets can eat any meat, but the range of plants they can eat is limited.
There is much evidence of consumption of wild grains before there was agriculture, and of course foragers eat wild starchy roots. This would have needed fire.
I don't mean to argue that human beings are meant to be literally vegan, don't get me wrong.
Yes, agricultural humans get most of their calories from plants, but for non-agricultural societies that's not necessarily true. For instance coastal human societies get most of their calories from fish, seafood, and marine mammals. Of course, the Inuit don't get much in the vegetables, but fisher societies like the Kwakiutl of Northwest survived on dried fish (fire required) and seals for most of the winter season. Their diet was supplemented with berries and nuts in the summer, but by far their largest caloric intake was from animal protein.
Archeological evidence shows that coastal humans have been exploiting the high protein resources of littoral regions for hundreds of thousands of years — even before modern humans — at least back to 200kya. Plus shells were traded inland as decorative objects as early as 60kya, and probably earlier.
In more modern times, cattle herders of the Southern Sudan and all along the Sahel, have a very high protein diet with lots of dairy in it. Some millet and maize, but vegetable diets are not certain in semi-arid and arid areas. Before cattle domestication, savannah dwellers definitely followed and hunted game, and their diet likely had a very high protein component. The same goes for peri-glacial dwellers in Europe where we at least one example where hundreds of mammoth were killed and butchered in a single event. It's estimated that several hundred or even a couple of thousand people would have had to participated in this hunt, and the meat was probably smoked and preserved and would have been the chief component of their diet through the long northern winters.
In the tropics and in highly fertile regions, a high-protein diet was less needed. Modern rainforest indigenous peoples have high carb diets supplemented by animal protein. It's believed that pre-agriculturalists of the middle-east survived off an abundance of seasonal plant sources.
As for your comment below about Gorilla teeth, proportionate to mouth size Gorilla molars are like 2x the size of human molars. All the better to chew uncooked plant materials. And the incisors are much larger than human incisors, but according to Dian Fossey, they are well-designed for pealing bark off trees. I don't know much about Gorillas, my training in primatology was only focused on human ancestors.
Granted fire helped to cook plant materials for humans, but there were vast tracts of the planet where humans lived where plant resources were not enough to survive on year round.
>Gorillas are nearly vegan. And yet, gorillas have sharp front teeth (much sharper, in fact, than ours!), and guts like our guts, without any of those specialized sacs.
"While gorillas are genetically similar to humans, they have very different digestive systems—more akin to those in horses. Like horses, gorillas are “hind-gut digesters” who process food primarily in their extra-long large intestines rather than their stomachs. "
I just meant that gorillas do have the particular features that Beowulf claimed are, in humans, adaptations to meat eating (sharp canines and the lack of cow-like stomachs).
As for the fact that our guts are smaller, I think it would be misleading to describe it as an adaptation to meat eating.
Instead, it represents, more generally, a shift away from fiber as a calorie source, and towards fats and carbs.
Apes such as gorillas can obtain lots of calories from the fiber in foods that don’t have so many carbs or fats in them, because, in their guts, fiber ferments, generating calories.
We can’t live on high-fiber, low-carb, low-fat foods; we live on foods high in fats and/or carbs. This doesn't mean "meat"; it means meat, fruit, nuts, starchy roots, grains, and legumes.
All the foods I just listed are available to foragers (contrary to "Paleo" myths).
In particular, starchy roots, grains and legumes require cooking for our digestive system to be capable of extracting the carb calories in them. This isn’t just because those plants are “hard to chew”. If you gulp down raw, uncooked flour, you won’t get many calories from it.
The discovery of fire, by allowing us to extract calories from such starchy plants, must have encouraged, and at least partly explains, this revolution, the shrinking of our gut and the change in what we use as fuel from fiber to fat and carbs. Because of fire, we became much better at living on wild tubers that other apes are, and this must be at least part of how we could afford to give up the ability to turn cellulose into calories.
So I think that the shrinkage of our guts isn't exactly an adaptation to meat eating, but more generally an adaptation to a whole range of foods.
Sorry, I was unclear; was responding to: "Is heavy meat eating in humans an adaptation for famine resistance?"
Humans are clearly genetically adapted to eat meat. What they likely aren't is genetically adapted to eat meat [i]specifically as a famine resistance technique[/i], except insofar as eating [i]anything[/i] is a 'famine resistance technique lol
(Edit: how the heck do you do italics in these comments?)
Ahhhh. OK. But, yes, I'd say a diversified portfolio of cultivars and domesticated food animals would provide some level famine resistance. The Irish Potato Famine comes immediately to mind as food monoculture that failed. Granted, the Irish population leading up the famine was so dense, that the average farmer didn't have the option of cultivating acres of wheat and large fields to support dairy cattle. Potatoes were the optimum solution for small plots of land—until the blight hit.
Animals are a less efficient way to produce calories. A given unit of land produces more calories with crops than with animal agriculture. Look at societies that are actually still vulnerable to famine: meat is a luxury.
So no, we're not somehow becoming vulnerable to famine because of veggie burgers, even in theory.
This is only because you're looking at through the lens of modern agro-business practices. Cattle can survive and even thrive in high desert environments (at least ones that have bunch grass). A rancher I spoke to in eastern Oregon explained that depending on the aridity and the grass density, it takes between 1 and 2 acres of land to support a single steer. Most ranchers round them up and ship the off to feed lots when they're 16 months old (if I recall), but before they're fully grown, to speed up the beef production cycle. But this rancher raises them until they're adults (2 years?), and slaughters them then. Fully grass fed. No growth hormones. No antibiotics. Raised on land that is too dry for regular crops, and too hilly for irrigation.
Iceland has a lot of sheep, which are migrating, self-feeding, and iirc return to their herders only for the winter. Outside of the kinda-fertile region around Reykjavik, Iceland in the summer looks like a sci-fi barren planet. The sheep don't mind.
Yes and no. If you look at the animals humans have domesticated, you will notice one thing about the vast majority of them: they eat something humans can't or won't.
The two big candidates are "cellulose" (cows/sheep/goats/horses/camels/water buffalo/donkeys/geese?/llamas/alpacas/rabbits/ and "vermin" (cats/ducks/chickens). Fish, which we haven't generally domesticated but hunt in massive quantities, also eat cellulose (i.e. algae) at some degree of directness (the trouble there is that many of the things that eat algae are themselves too small to eat).
*Grain-fed* cattle are a luxury, but meat and dairy in general frequently aren't (the traditional Mongol and Eskimo diets are nearly 100% animal). And even in the modern day, a grain farm does produce a lot of otherwise-useless plant matter.
That's true if you have land that could be used to produce plant-based food for humans. I believe that if you raise livestock on land too poor for crops, and by feeding them food waste, then they're pretty efficient at turning 0 calories into some calories.
That's definitely true of chickens, and sometimes true of cattle, etc. Water use, however, also needs to be counted. Also rabbits have been used in that way, though IIUC the process was a bit labor intensive.
Agriculture has always had higher water use than herding. Historically, there either had to be enough rainfall to support a yearly cycle of planting and harvesting, and, if not, irrigation had to be implemented. So agriculture clung to areas with fertile soil and plenty of water. Meanwhile nomadic herders occupied the (a) high steppes, like central Asia (where agriculture was impossible until modern grain cultivars were developed), (b) arid and semi-arid areas, like the Sahel, or (c) mountainous areas which were too difficult for terraced farming.
There have been arguments made that modern beef farming is water intensive. It need not be — if it weren't for the economics of fattening steers faster to get them to market faster. And I'm not so sure it really is as wasteful as some environmentalists and animal rights advocates claim. For instance, most all of the beef water use estimates that I've seen share the fatal flaw of assuming that corn (maize) kernels are the main component of the silage that cattle eat in feedlots. This ignores the fact (either out of ignorance or intent to deceive) that silage consists of the entire leaves, stalks, as well as the ears of corn, that are chopped up and allowed to ferment in silage tanks. So a steer is consuming entire corn plant (except for the roots). That may actually increase the water usage of feedlots, but it also means that cattle are eating cellulose laden leaves and stalks that humans are unable to digest.
BTW: before the last round of drought in California, almond growing consumed 1/10th of California's captured water. That's between 1/4 and 1/5 of all the water used in California. And cities consume less than a 10th of the captured water. There's been lots of talk recently about almond farmers making a big effort to waste less water, but I haven't seen any numbers on the conservation savings.
Again though, is that water that could have been easily used for something else? Or are they drinking water from puddles after a rainstorm and muddy creeks, and eating plants that contain water that would otherwise be inaccessible?
Agriculture and raising livestock are pretty new, while hunting is quite old. It would be surprising to me if developments of 10 thousand years or less had enough time to make a strong selection pressure (wolves were domesticated before that, but I don't think they were usually eaten; https://storymaps.arcgis.com/stories/893c422c13424a089b781564e9f69735 says the first animals domesticated for food were sheep around 10K years ago). My intuition could be entirely wrong here, though.
I suspect that technology can make for a stronger margin than animals, particularly in being able to move food from places where it is plentiful to where it is scarce, as well as preserving food. That's costly, but so is meat. An extended, worldwide famine would presumably impact livestock as well, although we could get at least nonzero food value from marginal land and food waste, which would help. A world with 0 livestock seems quite far away, though.
There's *definitely* been adaptation to agriculture. Some notable effects:
- Various modifications to alcohol dehydrogenase to reduce the likelihood of alcoholism.
- Modifications to the ergothioneine transporter to make it more efficient in populations dependent on wheat as a primary food source (which is extremely low in ergothioneine).
"Domestication" isn't an all or nothing thing, and one could argue that raindeer herding has probably been going on as long as people lived in marginal northern areas. In that sense I suspect that chickens (i.e. "Indian jungle fowl") were the first to be "domesticated", though this wouldn't mean "fenced in and only fed what we choose to feed them", but rather "people live near flocks of proto-chickens and drive the other predators away from them". Over enough time this evolved into the current situation. (That's sort of how we supposedly domesticated the dog, also. People put out garbage and the wolves came around to scavenge from it. The ones who got along better with people were more successful scavengers.)
We aren't going to get to a world with 0 livestock. But we might eventually get pretty close. (Are animals kept in zoos livestock? When Berlin was under siege during WWII the animals in the zoo were eaten.)
This seems trivially false. Humans were eating meat long before agriculture was a thing. As society moves towards less animal based foods our safety margin will be just stored differently.
That's a bit idealized. A century ago people tried to have supplies on hand to survive a year of crop failure. In modern cities most people can only survive a few weeks, and that by going hungry. It's like JIT manufacturing, pursuit of efficiency is (intentionally) done by reducing the safety margin. (I'm not asserting that's the only way it's done, just that that is intentionally part of the methods used.)
We can afford to do this because we have global trade. My country is more than self-sufficient with food, but if all the crops were suddenly wiped out at once then we'd simply switch to importing for a while.
The Irish Potato Famine happened because you couldn't just make a phone call and get half a million tons of Idaho's finest on the docks at Cork in a week; certainly not at a price the Irish could afford.
Where do you go for book recommendations? Looking for e.g. empirically-minded bloggers who discuss the quality of new books often such as marginalrevolution.
During 9/11, there was concern about backlash against innocent Arab-Americans. During COVID, there was concern about backlash against innocent Chinese-Americans. I haven't heard anyone worry about backlash against Russian-Americans now. Sure, people are cancelling Tchaikovsky concerts or whatever, and some people with Russian citizenship are having hard times, but no hate crimes against second-generation Russian immigrants or whatever.
Are we ignoring these now, were we over-panicking before, or is there some interesting difference between this situation and the others?
I was just talking to my coworker this morning about my concern for Russian hate increasing (and how, more broadly, the racist hate of the 20th century has been replaced with hate based on political beliefs and country of origin.)
Besides the already noted difference that people believe they can identify Arabs and Chinese on sight, but not Russians, I haven't seen anyone mention that this didn't happen to us. Sure, we're on the Ukrainians side and all, but we're not viscerally pissed off the way we were after 9/11.
There's been some of discussion of this on DSL. Being rationalist-adjacent at least, we're pretty much opposed to punishing random bystanders because of where they happen to be born, but there also seems to be relatively little of that happening, particularly at the "people getting beaten up in the streets" level as opposed to the symbolic and annoying Tchaikovsky-ban level.
Possibly it helps that almost no Americans are confident in their ability to distinguish random Russian-Americans from random Ukrainian-Americans.
The symbolic annoying Tchaikovsky bans, those are easier to target "accurately", and I'd like to see more pushback against that sort of thing.
I have seen a lot of posts on e.g. reddit condemning hate crimes against random Russian-Americans (I remember one from a week ago where someone had thrown bricks through the windows of a Russian-American owned business.)
To me the bigger difference is the behavior of big institutions. My university did not send out any email reminding us "hey don't start going and harrassing random Russians", the way my old university sent out an email saying "hey don't blame random Chinese people for covid." To be fair, this may be a difference in universities, since I switched in the last couple years. And also to be fair, all the phrasing I've seen on discussions/support/resources have been careful to phrase their offerings as for "anyone affected by" the invasion, which is wide enough to include Russians with various troubles.
The Chancellor of Texas A&M just sent out an e-mail a couple days ago saying "I hereby direct you to sever ties with Russian entities" and "The Texas A&M University System will not tolerate or support Russia in any way". I thought this was a bit drastic and cruel and tasteless, but as far as I can tell, the main job of the Chancellor is to reply-all to the "Happy Holidays" e-mail from the President with a "Merry Christmas" e-mail.
Good points all, though this just reminded me that my mother (in a European country) got called by her boss because he wanted to know if she'd heard the 'rumors' that all Russians employed by her employer were going to be fired. She told him that she has French citizenship and he shut up real quick but yeah, not great.
cynical answer, which I dobut is the complete story but is worth considering, is politics. it fits into a progressive worldview that innocent Arab or Chinese Americans would be victumized by an jingoistic and enraged America. it fits less well that Russians, who are coded as white, would face this type of discrmination, so progressives overemphsize the former and downplau the latter.
Alternativly, you could argue that Russians, being white, face less backlask, which doesn’t seem to me to be true, but should be considered.
I've seen concern about backlash against innocent Russian Americans, or Russians in other parts of the world that aren't Russia, and also quite reasonable demands to not blame Russians in general. The war was Putin's decision. There are courageous demonstrations in Russia against the war.
It's also true that there are Russians who favor the war, but even that isn't entirely their fault. They're being influenced by skilled propaganda. Some of them are close relatives of people under attack in Ukraine, and they don't believe first person accounts from their relatives.
There's a difference, but it's not very interesting. Putin is a blue tribe approved target of hate, so of course there will be no tut tutting about backlash. See also the lack of worry about backlash from climate deniers, anti-vaxxers, or conservatives generally. The fact that Putin is a mostly deserving target and russians are a relatively small and politically irrelevant group makes it easier, but mostly it's who, whom.
The obvious difference here is that this time, the cluster of people who write articles concerned about backlash against innocent X-Americans are much more on board with the "Yeah, fuck X" sentiment.
>but no hate crimes against second-generation Russian immigrants or whatever.
Well, it's hard to identify somebody as being second generation russian on sight (even first generation would only be marginally easier), and hard to identify that they're Russian rather than Ukrainian. Compare that with e.g. Asians who are virtually all unambiguously Asian (unless they're mixed race).
Though even if this were somehow happening, the obvious reason there would be little concern is that Russian are white, and the "worst" kind of white people (according to liberals). And most of the anger is coming from liberals, who are the people who would otherwise get angry over 'backlash'.
>Compare that with e.g. Asians who are virtually all unambiguously Asian
South Asians and East Asians, definitely. West Asia, less so, at least in terms of unalterable bodily characteristics; cultural factors (i.e. clothing and grooming choices) tend to be bigger issues there in terms of identification.
I hear it on a local level. For example when a Russian school here in Berlin was targeted by arsonists a few days ago or with more general hostility and attacks towards Russians.
Here in France there have already been threats and low-level violence/vandalism against Russian-coded establishments (restaurants, cultural centers, delis). As a Russian immigrant, I've personnally been asked to issue sweeping condemnations at work.
However, I agree that the fact that Russian-presenting and Ukrainian-presenting is very close, and therefore difficult to parse, and therefore maybe not as easy to politicize.
The interesting difference is that there is a lot of support for Ukrainians who look and speak the same as Russians according to everyone else. People that are inclined to (wrongly) hate ordinary Russians for this will still likely support Ukrainians and wouldn't be able to tell the difference.
I mean, the obvious difference is race, with a small side of politics.
I wonder if the type of person who would perform a dumb hate crime even has a mental image of what a Russian looks like, or would even have the inclination to do a little trolling for this particular cause.
I mean, everyone knows why. I'm sure Scott knows why too, and is merely exercising his famous PoC abilities.
...Principle of Charity, that is. I realized that may have been a confusing initialism to use and now I've wasted more characters explaining it than I initially saved. Damn it.
No, Scott is exercising his famous People of Color abilities, since he is a he\him masculine-presenting jew-identified person of color who has a right to question moral panics, unlike wh*te folxx.
Oh *that* David Friedman. I live in a 100+ year old house in a city named for a saint too. They do keep a guy busy. Mine is in a less temperate climate tho. ;)
Alexey Arestovich, advisor to Zelensky, predicted the war back in 2019 almost exactly play-by-play https://youtu.be/H50ho9Dlrms?t=434 (in Russian, no subs unfortunately)
Very vague question: How we can estimate my real-world impact when betting on a prediction market?
Let's say I raise a fund of 100 mln$, and then go all-in for "No" on "Will Putin resign by 1 April 2022?"
Should I expect some of his friends say to him: "You gonna resign anyway, let's at least make some money on the way out". They bet a few grand on a "Yes", announce resignation – PROFIT.
I lose my 100 mln$ (mostly), but this way I "buy" my future. (Literally buying "futures").
Sounds too naive, I know. Are there examples where this worked, in a brief history of low-liqudity prediction markets?
P.S. Idea stolen from "Assassination Politics", but I wanted to take a wholesome spin on that.
Are any Trumpists admitting they were wrong about Trump, given Trump's positive views on Putin?
This isn't an attempt at point scoring. Most right-wingers here still like to talk about Russia-Gate and how that was fake. Whereas it's pretty clear that Trump was a Russian asset, maybe not in the John Le Carre sense but in the literal sense that he was an asset to Russia.
It's relevant because it was obvious to many that Trump's admiration for the clearly evil dictator Putin was abominable -- it was a main cause of so-called Trump Derangement Syndrome.
How about some ex-Trumpists now admitting they had shit-brains for judgment about these matters?
No, because "Trump has positive views on Putin" was just an anti-Trump meme rather than anything backed up by actual statements.
I haven't read every word that Trump has ever said, but I've read a lot of articles that lead with the headline "Trump praises Putin" but which on further examination turn out to only quote one word, "smart", out of a two hour speech. And it's always in the context of a standard Trump riff about how Obama/Biden are dumb and our enemies are smart and our enemies keep taking advantage of how dumb Obama/Biden are.
I'm not a Trumpist. But I want to politely suggest that it's hard to get any accurate picture of what Trump thinks about Putin. First most of the media still has TDS, so that will give you a distorted view. 2.) Trump says different things to different audiences, so what do you take as his 'real' view? 3.) My personal view is that Trump praises dictators, because he sees that as the best way to get what he wants from them, he also criticizes democracies for the same reason. You can disagree with his approach to world politics, but I think it's a mistake to say that he has positive views, just because he praises Putin.
My personal view is that Trump admires strength, cunning, and winning by any means ... In himself and others. He praises dictators for those qualities because he thinks he is like them, and they are like him.
If you discount public statements, then it's very hard to get an accurate picture of what *any* politician thinks about *any* topic – not just Trump about Putin.
> Trump says different things to different audiences, so what do you take as his 'real' view?
What other opinions has Trump voiced about Putin to other audiences?
> My personal view is that Trump praises dictators, because he sees that as the best way to get what he wants from them, he also criticizes democracies for the same reason.
That doesn't make a lot of sense. If Trump wanted to appease dictators (and if we assume that he's competent at it), wouldn't he praise them and criticise democracy in private, to their face? Instead of alienating your base for little to no gain?
Like I said, you can disagree on his approach to world politics. I think all Trumps statements about any dictator had one audience, and that was said dictator. With this model it's much easier (for me) to understand his behavior. Understanding does not imply support or agreement with said behavior. I see Scott has added the no politics caveat at the top of this post, so we should probably postpone this for another thread.
I'm firmly in the anti-Trump camp and convinced that Trump secretly admires Putin for his attitude towards free journalism and democracy, and even I think your comment is terrible and deserves more than a mild warning – non-CW thread or not.
Positive view is not support (and, for that matter, negative view is not opposition).
There is a recurring idea that the far-right just *love* Putin and will <insert homophobic joke, but this time it's totally Ok because it's the rightists that are homos>. There may be some, but I suspect the majority of the western far-rigth have a simple respect for Putin. Respect for Strength (or the image of it), of upholding the strategic interest of his nations (rather than approval from NYT op-ed) and of traditionnal values (rather than destruction of them).
From there, you can regret the invasion of Ukraine but understand that 1- It's necessary (or perceived as such) for said strategic interests of the Russian people and 2- May be a mistake, but that doesn't invalidate anything above.
I recall an interview of Putin, some weeks before the the invasion, where he said something along the line of "I'm not your friend, I don't want to be your friend, I'm the president of the Russian Federation". Putin's role is not to be loved by the west, it's to protect Russia's future (and again, trying to and miscalculating isn't the same as what is perceived, amongst western right-wing, as the total rejection of western elites to protect western future). They don't love Putin and what he does for Russia, they love a leader who does what is best for their country (and whish they had one).
All this to lead to Trump, which may not have been exactly that, but at least a step into a different direction than the current behaviour of western elites (which could be summed up as "defect on the west at all cost").
And of course, there's the simple fact that Putin annexed Crimea when Obama was President, invaded Ukraine while Biden was president, and stayed put for 4 years while Trump was there (in fact, separatists lost ground during that time, from what I can tell). How to square that with the idea that Trump was ameniable to Putin?
I may have shit for brains, but at least my model fits verifiable facts.
How the f- do you want to say this isn't point scoring, and then call your ideological opponents SHIT BRAINS
In any case, no, you're wrong, completely wrong. Trump has condemned the invasion and his "praise" of Putin is transparently an opportunistic attempt as criticizing his political opponents, as in "these democrats aren't smart or tough enough to deal with Putin. We wouldn't be in this situation if I were still president"
But again, he condemned the invasion in no uncertain terms:
“The Russian attack on Ukraine is appalling,” he told the Conservative Political Action Conference (CPAC) in Orlando, Florida, on Saturday night. “It’s an outrage and an atrocity that should never have been allowed to occur"
Stating that Putin is 'evil' is really just a way of saying that you don't understand his motives or the political background to the current war. All war is evil, sometimes a necessary evil, but evil nonetheless.
I would suggest that the people showing poor judgement are those who ignored the warning signs from Russia for the past 3 decades.
I think we have different definitions of either "evil" or "necessity". The Donner party were not evil for eating companions who had died. That was necessary (in order to live). I would not be evil for eating an extra ice cream cone, even though it would be an extremely foolish thing for me to do. Putin, Stalin, and Hitler were/are evil. They do gross harm to others without necessity.
War is not, in and of itself, inherently evil. Intentionally starting one when you don't need to is. (War *will* inherently contain acts of evil, but I accept the possibility of just wars. I think the US entry into WWII was not evil, though the Japanese assault on Pearl Harbor was, even if they were maneuvered into doing so.)
No, he annexed Crimea to be able to maintain access to a warm water port. He armed the rebel factions in Eastern Ukraine to attempt to weaken the post-2014 coup Ukrainian government. The invasion is apparently an attempt to finally remove that government entirely and replace it with a Russian-aligned one.
I forgot the put the poltiics disclaimer on this thread, so I can't blame you for that, but this is a bit more hostile than we usually do around here. Consider yourself mildly warned.
I've seen people suspended on here for a hell of lot less than calling an out group "shit brains". I was warned much more sternly for saying somebody shouldn't comment on a scientific topic they don't know about. How on earth does an extremely strong out-group swipe like SHIT BRAINS get a "mildly warned"? The fact that this isn't a politics thread is irrelevant, this shouldn't be allowed in either case.
Agreed, even on a politics-allowed thread I would have thought that comment would get more than a mild warning. Also note OP's further comments here (e.g. "You people are disgusting. Go to hell.").
That ship had sailed about 25 years ago. Since then Russia has essentially molded its ideology and image into being ostentatiously anti-US and anti-West in general, and being wilfully ignorant of it these days is patently absurd. Which is of course why Putin liked Trump so much, he was willingly embarassing the US of his own accord, so helping him in this endeavor in any way possible went without saying. I don't know how impactful that help actually was, but if we're to take the US intelligence at its word, the "election interference" played a decisive role in Trump's victory, which would likely make it a bigger coup than any Cold War operation.
Nah, if America was willing to do that it would've happened in 1946, with much less risk. Right now the Iran/North Korea scenario seems to be the unavoidable "least bad" option.
Are there any practical models for how to run a flexible, competent authoritarian government? Like in political science, organizational structure, etc. I'm not pro-authoritarianism (I swear!), I'm just sort of interested as to how across countries and cultures they keep running into the same problems- inability to deliver bad news up the command structure, lots of inefficient corruption, very suboptimal ways to transfer power when the strongman dies, unmeritocratic because loyalty is rewarded over competence, and worst of all, rigidity and inability to change over the decades as needed in a changing world. There's a reason all of the per capita wealthiest countries are democracies. This is meant not as a moralizing analysis, but just as a practical one- this kind of thing is not the optimal way to run a country! https://www.youtube.com/watch?v=ucEs0nBuowE
Do you.... have a council of various elites serving as like a board of directors, with a strongman chief? Is that the way? It seems like you'd want some sort of constitutional system where say the above intelligence chief serves at the pleasure of the board, hopefully so they can get better intel and not just groveling out of him. How can you ensure a meritocracy, so that whoever rises to the top of the military or an agency is actually intelligent and not just a lacky? Fascist and/or right-wing models tend to leave existing private business in place- could those owners get any say in society as an organized interest group, to prevent the strongman from going off on destructive whims or something? Could a feudal structure (hierarchy of various nobles) actually work in a 21st century country? (Maybe Moldbug has written about this, I don't know)
Almost all of the countries that have produced economic 'miracles ' in the last half century or so have been authoritarian. South Korea, Singapore, China. It mostly seems to depend on lucking into the right autocrat (park, Lee, Deng).
Democracies in fact seem to be very poorly equipped to become rich quickly (or, in the longer scheme of things, stay rich). Mostly this appears to be because of classic collective action problems. Special interest groups and incentives of political actors force countries into sub optimal equilibriums for much much longer than is 'needed'. Mancur Olson covers this stuff very well in Rise and Decline.
> Are there any practical models for how to run a flexible, competent authoritarian government?
By definition, can an authoritarian government have real internal error correction?
Power corrupts, and all but the most trivial systems become corrupted over time. Especially a system of humans. From the largest government, to your town counsel, to the policeman on the street, or your local HOA - the power and control of resources attract people who will exploit them for personal gain. Separation of powers sets corruption in one branch of the government to collide with the different self interests from the other branches. How do you replicate that in a true authoritarian situation?
I'm not really that knowledgeable about political science or social structures, but I will talk out of my ass anyway. I would say that, in some sense, you're artificially *defining* authoritarianism to have the kind of problems you mention, or confusing orthogonal problems (== could happen in both "authoritarianisms" and non-"authoritarianisms") to be solely problems of authoritarianism.
For example, this :
>inability to deliver bad news up the command structure
has nothing to do with authoritarianism, it's purely a problem of the "narrow" sampling of underlings. If the supreme-leader has only one view into the external world, whatever his $AID says, there's a single point of failure to the su-le sensor suit. If/when $AID fails or misfires (accidentally or due to incentives), su-le ceases to sense the external world and starts acting based on whatever imaginery input $AID provides, often with disastrous consequences when su-le choices are fedback to the real world.
This can happen in "democracies", I recall something I read once about drone images of Iraq capturing nothing plausible about WMDs, but as the results travelled further and further up to Bush, every layer of interpretation added more and more danger till "{IRAQ: WMDS, CERTAINTY:95%}" somehow got to Bush. The reasons for the Iraq invasion is probably more complex than Bush's underlings bamboozling him, but this is just an example off the top of my head for why this problem is far from unique to authoritarianism.
Hell, consider a fictional ideal democracy where the su-le's view of the world is a function of 75% the people's opinions (perfectly transmitted, the su-le perfectly knows and experience what every single citizen thinks, and combines the least-conflicting 75% into a final decision). If $MEDIA_EMPIRE captures the information streams of 75% of the people, then the whole "democracy"'s view of the world is whatever $MEDIA_EMPIRE wants. Perhaps your thought process is something like "democracies distribute power so that this failure state is less plausible than in authoritarianisms", but consider that (1) the thought experiment is highly ideal, real democracies are actually surprisingly locally-similar to authoritarianisms at a myriad of levels, including the president (2) Capturing a large group of people's info streams is not automatically harder than capturing a smaller group's, for Mark Zuckerberg, hacking facebook's ~2billion users' view of the world is vastly easier than influencing China's Xi Jinping (3) Democracy expends massive amount of effort and complexity trying to distribute the power, which could have been better spent at distributing the power and info streams of the supreme-leader and their close circle instead, this is much much easier and less resource-consuming. (As per (1) and (2), democracy still often fails to distribute power despite all what it tries to do).
Then we have :
>lots of inefficient corruption
Also completely orthogonal to authoritarianism, corruption is people bypassing the established rules in favor of their own, ad-hoc, informal rules. It happens whenever people don't believe the established rules are fair AND they can get away with breaking them. Corruption is not possible if any of the 2 conditions are broken. If anything, authoritarianism should be *less* susceptible to corruption, as they're stereotypically better at indoctrination (helpful for convincing people rules are fair) and surveillance-punishment (helpful for convincing people they can't get away with violating rules).
>very suboptimal ways to transfer power when the strongman dies
I assume you're talking about violence ? Strictly speaking, violence is not really sub-optimal if whatever dumpster fire that happens never harms anyone but the (losing) candidates and doesn't extend too much. But really, this is begging the question: if you're assuming that violence always happens during transfers of power in authoritarianisms, you're really assuming authoritarianism has already failed, it's redundant to ask why it failed, you just assumed it did.
What prevents a losing US president from getting a cartonish red face when the ballot results are out and starting a civil war ? Stories. Very powerful stories. "Democracy","Constitution","The Will Of The People","The Founding Fathers","America Is Different", etc.. etc.. etc...Even if a losing candidate doesn't believe in ALL of those stories, they *know* that a lot of people do, enough to think very very carefully before violating them, and - till now at least - the cost-benefit always yields that it's not worth it.
What prevents a losing candidate in an authoritarianism from losing it and starting a civil war ? Just like Democracies, Stories. Just different stories. My own country is a 'soft' military dictatorship since 1952, nearly 3/4 of a century. Transfer of power is, with the except of 2 exceptions, always peaceful, probably less loud and less expensive than America's elections in fact. The only 2 exceptions ? One when we tried to make a democracy, and one when the embryonic democracy failed and a new dictator had to do some housekeeping normally not required when dictators pass control to each other.
So your fundamental assertion in the above statement really boils down to "democractic stories are easier to maintain and spread than authoritarian stories", which is not true in general, you can convince anyone of any story if you have good enough storytellers.
>unmeritocratic because loyalty is rewarded over competence
Come on come on COME ON. You're really leaning hard on the poor authoritarian bastards here. American universities favor people based on how much melanin their skin genes expressed, US presidents routinely hire Supreme Court judges from their own party. Those are not examples of "loyalty over competence" ?
Again, a completely independent and orthogonal failure mode that happens for it's own complex reasons.
>rigidity and inability to change over the decades as needed in a changing world.
Really ? China didn't drastically alter it's economic organization in response to a changing world ? Singapore and Saudi Arabia didn't build their own deadly symbiosis with technological globalized capitalism starting from *pre-industrial* economies? You sure you're not letting your moral views color your perceptions?
My own view, summed up in short sentences without justification :
- Authoritarianism is forced centralization.
- It's extremly simple to reason about and astonishingly efficient. It's the simplest solution to the problem of distributed consensus.
- There's no inherent fault or bottleneck in it at all, just bad implementors.
- Typical models of Authoritarianism is biased toward assigning incomptence because most major notable examples of authoritarianism since the last century were communist dictatorships, and the occasional Nazi or Fascist dictatorship, which were disasters for reasons completely unrelated to their Authoritarianism.
- Typical models of Democracy are biased toward assigning comptence because of Western democracies's temporary 20th century fluke, which again, is extremly confounded by everything from the rise of empirical sciences in the 15th-19th century (has nothing to do with democracy), colonialism (has nothing to do with democracy) and capitalism (has nothing to do with democracy).
Isn't this pretty much Singapore? I always had the sense that Singapore under Lee Kuan Yew was what Russia should have been if Putin was smart and not evil.
Pretty good call. They seem to be quasi-democratic in that they do have free elections, it's just that the deck is stacked hard in favor one of one political party, so in practice they're a one party state. But yes that's a good example- have a semblance of a democratic system, just restrict who can run
Nope. It's not possible for a small central authority (e.g. one man, or a triumvirate or something) to grok a society of millions well enough to run it efficiently. Might as well catapult 40 tons of aluminum, 10,000 pounds of gasoline, a whole lot of rivets and instruments and wiring harnesses into the sky, along with a pilot and mechanic, and ask them to assemble a working airplane and then fly it to its destination before the whole mess makes a big hole in the ground. The *only* way a society functions efficiently is when the bulk of the decision-making is devolved to a sufficiently low level that the knowledge required is so local and limited that it lies within the power of one or a few human brains to grasp.
But that rules out authoritarianism by definition.
It would if China hadn't started from a position of extraordinary underperformance. Reversion to the mean, eh? Let's talk again when the per-capita GDP of China ($11,000) gets in the neighborhood of Japan ($40,000), since Japan started as a field of rubble strewn with corpses in 1945.
And I agree that the smaller the operation, the better a chance that authoritarianism will work. It actually *does* work pretty well for a platoon, family, or very small business.
Singapore is an autocratic one party state with a GDP per capita of almost $60k (so 50% higher than Japan). It also has a population as large or larger than say any of the Nordic countries.
Every developing country that became developed in the last 100 years started out as an authoritarian government (South Korea, Taiwan, Singapore)- seems relevant. Even the moderate success stories (Thailand, Malaysia) are autocratic. I'm personally very pro-democratic government, but it's hard to miss these uncomfortable facts. I agree that democracies appear to be more efficient (for the most part, excluding Singapore) once you get to a certain level of development
The population of Singapore is only a smidge larger than the single suburban California county in which I live, which is maybe a dozen miles end to end and is run by a simple county council. I'm underwhelmed by a moderate success among what amounts to the population of a largish city. You might also have pointed out that the US Army has a "population" of 1.4 million, give or take, and is pretty much infinitely authoritarian and yet works well.
Given that the list of transitioning countries you mention is strongly weighted to Asia, and Asia has a long tradition of authoritarianisn going back to Marco Polo *and* a fair amount of its development appears to be merely catching up to the European and Anglosphere West, this again underwhelms. It feels more like the US Army example: the path of development for these nations was clearly marked out by those who were in front of them -- Europe and the US, for example -- so like the Army with its clear goals, it isn't super suprising that a disciplined focus on getting the job done, the concrete factories and highways and electricity mains laid, works well.
But this says nothing about how you do well if there *isn't* a clear path, if you're in the vanguard, say. How does a large and polymorphous country like the US, or China, or Russia, or Europe taken as a whole, remain at the forefront of prosperity? The history of transitions from devolved distributed decision making to centralized authority in situations like this -- no clear path, large and complex demographics -- is one of almost uniform failure. It's difficult to think of *any* success story.
china is about as well off as mexico. it's only a success relative to what it was under mao, which at one point literally ordered farmers to melt down their tools so they could export "steel". it doesn't take much to do better than that.
you could have anarcho-monarchism, where the King theoretically has absloute power, but uses his absloute pwoer to delegate most of his power to local leaders. He would still retain abslotue control of things like defense and foreign policy, where the situation is ammenable to understanding by a central authority, and retains the ability to medle in local affaris whenever he likes, he just, usually, doesn’t.
Anarcho-monarchy includes the unstated premise that the king is Aragorn son of Arathorn. "Incentives and threats" generally include Anduril, Flame of the West.
Well if we're going to quote Tolkien, let's get it from the man himself (from a letter of 1943 to his son Christopher):
"My political opinions lean more and more to Anarchy (philosophically understood, meaning abolition of control not whiskered men with bombs) – or to 'unconstitutional' Monarchy. I would arrest anybody who uses the word State (in any sense other than the inanimate realm of England and its inhabitants, a thing that has neither power, rights nor mind); and after a chance of recantation, execute them if they remained obstinate! If we could get back to personal names, it would do a lot of good. Government is an abstract noun meaning the art and process of governing and it should be an offence to write it with a capital G or so as to refer to people. If people were in the habit of referring to 'King George's council, Winston and his gang', it would go a long way to clearing thought, and reducing the frightful landslide into Theyocracy. Anyway the proper study of Man is anything but Man; and the most improper job of any man, even saints (who at any rate were at least unwilling to take it on), is bossing other men. Not one in a million is fit for it, and least of all those who seek the opportunity. And at least it is done only to a small group of men who know who their master is. The mediævals were only too right in taking nolo episcopari as the best reason a man could give to others for making him a bishop. Give me a king whose chief interest in life is stamps, railways, or race-horses; and who has the power to sack his Vizier (or whatever you care to call him) if he does not like the cut of his trousers. And so on down the line. But, of course, the fatal weakness of all that – after all only the fatal weakness of all good natural things in a bad corrupt unnatural world – is that it works and has worked only when all the world is messing along in the same good old inefficient human way. The quarrelsome, conceited Greeks managed to pull it off against Xerxes; but the abominable chemists and engineers have put such a power into Xerxes' hands, and all ant-communities, that decent folk don't seem to have a chance. We are all trying to do the Alexander-touch – and, as history teaches, that orientalized Alexander and all his generals. The poor boob fancied (or liked people to fancy) he was the son of Dionysus, and died of drink. The Greece that was worth saving from Persia perished anyway; and became a kind of Vichy-Hellas, or Fighting-Hellas (which did not fight), talking about Hellenic honour and culture and thriving on the sale of the early equivalent of dirty postcards. But the special horror of the present world is that the whole damned thing is in one bag. There is nowhere to fly to. Even the unlucky little Samoyedes, I suspect, have tinned food and the village loudspeaker telling Stalin's bed-time stories about Democracy and the wicked Fascists who eat babies and steal sledge-dogs. There is only one bright spot and that is the growing habit of disgruntled men of dynamiting factories and power-stations; I hope that, encouraged now as 'patriotism', may remain a habit! But it won't do any good, if it is not universal."
Sure, and if we had a race of superbeings who would exercise that power wisely and with restraint, it would indeed work better. There's no question that *when* you have an unusually wise, restrained, and talented leader, it *does* work better than a decentralized liberal marketplace (of things and ideas). You cut out a lot of waste. This is what always tempts people toward the model. Sort of a Underpants Gnome theory of social success:
1. Set up a system where a wise philosopher king, way smarter and more disciplined than the average human, can bring order and efficiency to society.
This has always been my view as well. The schemes for some sort of kingly academy or selection committee to install an absolute monarchy would, at best, create a situation where the selection committee runs the nation instead of the king (at least, until an unexpectedly-independent king liquidates them). I have similar feelings about plans to put an AI in charge - you essentially end up handing off absolute power to the people who design the AI, in the hope that they don't program it to make them and their descendants quasi-monarchs.
> Are there any practical models for how to run a flexible, competent authoritarian government?
Authoritarian or just not democratic? because those are very different.
> inability to deliver bad news up the command structure, lots of inefficient corruption, very suboptimal ways to transfer power when the strongman dies, unmeritocratic because loyalty is rewarded over competence, and worst of all, rigidity and inability to change over the decades as needed in a changing world
These problems are endemic to democracies as well, except for transfer of power. That's the big problem you need to solve. Historically, the most sustainable systems are something like the dutch republic. Self selecting city councils* that elected executive officials and appointed representatives to provincial governments mostly from their own ranks. The provincial governments in turn selected provincial officials and representatives to a national government. These being early modern institutions, there was a tremendous degree of variability in terms of who got to be a member of what and every rule had numerous exceptions. The system was stable because power was highly decentralized and locally based, and everyone involved was selected from a narrow clique that had a common disinterest in being bossed around by anyone who wasn't a prince of orange.
* not exactly the right term, they were more like a roman senate, an assembly of notables. But again, tremendous variation existed.
You might want to consider taking a look at Violence and Social Orders: A Conceptual Framework for Interpreting Recorded Human History, by North, Wallis, and Weingast. Here's the description from Amazon:
All societies must deal with the possibility of violence, and they do so in different ways. This book integrates the problem of violence into a larger social science and historical framework, showing how economic and political behavior are closely linked. Most societies, which we call natural states, limit violence by political manipulation of the economy to create privileged interests. These privileges limit the use of violence by powerful individuals, but doing so hinders both economic and political development. In contrast, modern societies create open access to economic and political organizations, fostering political and economic competition. The book provides a framework for understanding the two types of social orders, why open access societies are both politically and economically more developed, and how some 25 countries have made the transition between the two types.
It's long and very in-depth, but it'll give you a better idea how and why authoritarian regimes operate the way they do, and why the don't often go away.
You might consider looking into how the government of Iran operates. It's not quite an autocracy, but the supreme leader is appointed for life and is the head of state, etc.
Management consultants like the buzzphrase "Culture eats strategy for breakfast". It applies to countries as well as companies. If you have a bunch of intelligent, virtuous, "god and country"-style administrators, you can organize them however you want and it will probably work. If you have a bunch of kleptocratic psychopaths, you can organize them however you want and it will always be a shitshow. If I were a dictator and wanted to run a flexible, competent government I would not think to hard about organization, and I would worry a lot about hiring and firing the right people.
But then you get to the interplay of organization and culture and how they feedback on each other, and the thesis fails there and organization becomes important.
I'd be very interested in someone giving a deep explanation of how China's system works. I've been trying to read about it, but it all sounds very boring and formal and I don't feel like I understand either the logic of the pre-Xi system or how Xi managed to subvert it.
Agreed. My vague impression is that it's a series of interlocking councils? A council to cover every possible governmental department or interest, then a series of governing councils in a hierarchy, like Russian nesting dolls.
Honestly, I could say the same thing you said about how the EU works? 'I've been trying to read about it, but it all sounds very boring and formal and I don't feel like I understand either the logic'
I am also interested in this. My 30k foot understanding is that at the high levels nothing actually works how it is ostensibly supposed to work, as was often accused of the USSR.
I’m looking for examples in science fiction literature of human-AI melding being portrayed in a way that is especially clever, interesting or convincing. Here are some examples of the kind of thing I mean by “melding”:
-In one W. Gibson novel, there was a being named I think Idoru who only existed online, and appeared there as a beautiful woman. She and a human character had fallen in love, and were trying to figure out how to make human/AI love work.
-In another Gibson novel, characters had moving tattoos of extinct animals implanted in their skin, animation accomplished by some kind of nanotech integrated into skin cells.
-In a Vonnegut novel, a robot with human-level intelligence dismantles itself in despair, I believe because it realizes it is a robot.
-In some random scifi I read long ago, vehicles traveling through interstellar space were guided by pilots who had the ability to experience the space & its various suns, planets, hazards, wormholes etc. as an earth-like landscape: From the pilot's point of view they were piloting a vehicle across mountains, forests, rivers, through storms, around volcanos etc — but all the terrestrial features somehow mapped one-to-one with features of interstellar space, and the pilot’s navigating of terrestrial features and dangers guided the ship through their interstellar equivalents.
Anyhow, it would be useful to hear of writers who are good at this, but even better would be to get some descriptions of AI/human connection or hybrids that impressed you.
Some of the Bolo stories explore this. Later-model Bolos (AI-controlled tanks) can mentally interface with their human commanders to form a gestalt entity with AI speed and human intuition. This also starts giving the Bolos some of humanity's less admirable traits...
In Vernor Vinge's "True Names" the iconic "warlocks" use a "Portal" VR device that enables them to interpret binary data as medieval/fantasy world. This allows them to manipulate the network. There is a further relevant aspect which would be a ruinous spoiler but is basically really cool, especially for the time.
Do uploaded consciousnesses count? If so check out Diaspora by Greg Egan, and The Bobiverse. Not sure they meet the bar for clever, but I enjoyed them a lot!
A guy uploads himself into an AI, jumps out of the box, takes over a corporation with a bit of blackmail, and throws a copy into space. Hijinks ensue. At one point he/it realizes it is running out of storage space (due to a minor accident) and uses a comatose man as a additional storage, which kinda messes up them both, as well as the virtual reality.
It's unfinished, but the story of that AI ,at least, comes to a conclusion.
Thanks, I'll have a look at your story. The last example I gave wasn't as dumb as it sounds in my description. Pilots weren't viewing 3D space as 2D terrain, they were viewing it as 3-D terrain. They went into a sort of trance during their shifts in which they experienced themselves as driving a vehicle over a challenging earth landscape. Their brains had been modified in a way that allowed a feed of info about nearby interstellar space to generate a hallucinated earth landscape whose features and challenges corresponded one-to-one with equivalent features of ship's current space environment. And the pilot's actions when driving on simulated earth also corresponded with actions the ship took, but not in a simple way, not as a duplication of what the pilot did "on earth." For instance, if pilot came to a gulch, that would correspond to some area of interstellar space that could not be easily navigated. If pilot "on earth" chose to turn left and go around the gulf, ship would likewise do something to avoid the area -- but not necessarily by literally turning left, but by something that corresponded to turning left in some deeply valid mapping of vehicle-steering options in certain situations onto spaceship action options in corresponding situations.
But it's hard for me to conceptualize that as the pilot doing anything meaningful. The ship is telling him there is an obstacle, and he tells the ship he wants to go around it, in essence. But the ship isn't giving him accurate information about the obstacle, and he's not really giving the ship actionable advice.
And I have a hard time believing you could translate space to a similar Earth environment. In space you can approach an obstacle from any direction (The enemy gate is down!), whereas on earth, if there is a gorge, you can go right or left or maybe try to jump it somehow, but that's it. And the inertia of a spaceship is got to be radically different than an earthbound vehicle in an environment with gravity and friction.
It sounds like a "cool" idea, don't get me wrong, but it might as well be an AI telling the pilot a fantasy story ala AI Dungeon for all the correspondence the VR world would have with reality.
(I will of course back down entirely if John Schilling tells me it makes sense!)
Hmm. I can *maybe* make sense of it if almost all space travel is confined to a single plane (e.g. the ecliptic plane in the solar system) and you're using the third dimension in your sim-world to represent gravitational potential. The big issue in space travel is not obstacle avoidance - if you completely ignore the asteroid belt while plotting a course from Earth to Jupiter you probably won't even see your insurance rates rise. The big issues are that A: you're trying to hit a moving target, from a moving launch site and B: unless you've got ridiculous amounts of energy to use, you absolutely have to account for and exploit gravitational potential energy.
That's a hard enough problem even in two dimensions, and I don't *think* the proposed visualization hack is going to make it much easier but I could be wrong. Other problem is, really not everything is in the same plane, particularly if you're interested in planetary sites that aren't on the equator. And plane changes are difficult enough that you probably don't want to abstract the third spatial dimension out of your navigational VR just to maybe simplify the gravitational-potential part.
I *really* am not defending the tech in the sci-fi as plausible,. But I do want to give a better account of what the tech was, if only to explain why I enjoyed the book instead of sniggering and throwing it away. The ship was not flying around a solar system. Seems to me that if you were able to do that at a pace that made Earth-to-Pluto something like a Sunday-afternoon drive, it would have worked fine to just let the pilot see through the "windshield," and fly the thing like a jet using a combo of his own senses and relevant data appearing on displays nearby. But in my scifi the ship was traveling at far greater than light speed, through a universe full of dwarf stars, black holes, wormholes, dark matter -- in short, a potpourri of various weird entities scooped from the news of the era when the book was written. So the various hazards and opportunities came up often, as hazards would traveling over wild, difficult earth terrain, and many required creativity and judgment calls to navigate successfully. But the hazards to the space craft were things not visible to the pilot's naked eye. The pilot was trained (maybe with the help of a brain implant) to turn a feed of info about the ship's surroundings into isomorphic problems occurring on a hallucinated earth.
I wasn't claiming that anything like this could work now, or even that it could be made to work in a technologically-advanced future. Was just saying that I found the idea plausible enough for me to get on board with it imaginatively as I read the story, yet weird enough to give me an enjoyable shiver. It was in my sci-fi sweet spot, in other words. I suppose the idea in the background that my mind piggybacked the piloting system onto was what I know about isomorphic problems. For instance there's the 8 queens problem -- how do you put 8 queens on a chess board so that none is attacking any of the others. (There are 12 unique solutions.) There's a way to look for solutions using just numbers -- has something to do with prime numbers and factors, can't remember details right now. Anyway, I thought of the ship's computer as generating, moment-by-moment, earth-terrain problems that were isomorphic to ongoing interstellar navigation problems.
In my (very limited) experience, John Schilling is better at identifying ideas that do not make sense and stomping on them than at extracting and amplifying the value in partially-correct ideas.
I'm sorry if I'm being harsh. A good, thought provoking story can come from an implausible premise, and it's also entirely possible my intuition would be belied by math.
Oh, I didn't think you were being harsh, just maybe misunderstanding me a bit, thinking I was saying the piloting idea was awesome and valid, when really I just meant I got on board with it while reading a sci-fi and had a great ride. (John Shilling though, in my experience, is harsh, but my behavior sample is limited.)
In Ancillary Justice by Ann Leckie, ships are guided by AIs and staffed mostly by human bodies fully controlled by AI. Viewpoint character is one such body that got separated from the ship.
In To the Stars, human politicians meld with ai politicians to form larger political blocks. And human generals became part of their flagship’s Ai in command mode.
If you liked the ACX grants content, and want to do something like that yourself (on a much bigger scale), consider working for Open Philanthropy!
Our goal is to give as effectively as we can and share our findings openly so that anyone can build on our work. We plan to give more than $500 million in 2022.
Roles differ widely in the level/types of prior experience we want, and I'd guess that many ACX readers would be an excellent fit for one or more of them. Current openings:
Business Operations Lead - Manage the team responsible for making sure Open Philanthropy runs smoothly and efficiently day-to-day. We’re looking for applicants who resonate with Open Philanthropy’s mission and are excited to take ownership of building an excellent business operations function.
Program Officer, Global Health and Wellness Effective Altruism Movement Building - As the first hire and leader of this program, the Program Officer will be responsible for identifying grantees, making grants, and developing our movement-building strategy over time.
The Longtermist Effective Altruism Movement Building team works to increase the amount of attention and resources put towards problems that threaten the future of life and is hiring for four roles: Program Associates, a Projects and Operations Lead, a Program Operations Associate, and people to take on Special Projects. We're looking for candidates at varying degrees of seniority with a strong interest in effective altruism and longtermism.
We are https://hookelabs.com, a family-owned company (15 years old, ~50 people but growing fast) based in Lawrence MA USA (30 min north of Boston/Cambridge). Our focus is research on autoimmune diseases (multiple sclerosis, colitis, arthritis, etc.), but we’re also branching out into development of scientific equipment.
You’d be the third regular SSC/ACX reader here (that I know about).
About 80% of the work we have now is in Python/NumPy, with another 15% in C (or Rust if you prefer), and 5% “other” including Google Apps Script. You don’t need to be able to do *all* of that.
The Python/NumPy work is on PCs and Raspberry Pi. The C/Rust work is on microcontrollers.
We have a lot of different projects, large and small. These include:
• Image analysis in Python/NumPy
• Embedded systems work on Raspberry Pi and microcontrollers
• Web-based UI development for scientific analytical equipment (mostly image related)
• Act as mentor to other sw developers
We could also use some help with IT stuff – we have a full-time IT person but he’s pretty overloaded. (We run Windows networks.)
This is a good position for a person who gets bored easily - you'll get to juggle projects, to some degree, to your taste, so long as they all move forward at some reasonable rate eventually (we don't have hard deadlines on most things, just stuff that needs to get done).
I don’t really expect one person to be able to do all this stuff, but the more you can do the better.
I’d prefer a full-time, on-site person, but we’ll also consider part-timers and people working from home (part of the time). Hours and most other things are very flexible. We offer all the usual benefits. We pay well and expect high performance.
To apply send a CV to <jobs (at) hookelabs.com>; put “Software Engineer” in the subject line.
Would you consider bringing in someone on a visa? As I posted in a later post, I grew up in Russia (but moved to the US long ago) and statistically probably know people who would be interested in having a tech job outside of Russia.
Yes, we'd consider it - if we knew how. We've never done that before and I suspect it's complicated and difficult. (We do have some people working on some kind of time-limited student visas, but I think in a couple of years they'll have to leave the US unless they get a green card.) We do help people get green cards to the extent we can (helping with legal fees and letters, etc.) but again we don't know much about the process or have any control over what US immigration does (I do wish it were otherwise!).
Another thing we'll consider is remote work - if a good person is in Russia (or elsewhere) and wants to work from there, we'll give it a try.
The comment about Saudi Arabia is interesting. I'm used to thinking of Saudi Arabia in its capacity as a petrostate / mideast US ally / repressive regime. Easy to forget that it's also the spiritual center of a major religion and that this has significant consequences for world events.
Wow. Thanks from me too. Somehow I had missed this.
This part really got my attention:
“But if this scene was to be believed, it turned out that terrorists didn’t need a learned debate about the will of God. They needed their spirits broken by corporate drudgery. They needed Dunder Mifflin.”
It's sitting at 99%. That generally means that "this event already happened". But, the question is not resolved or even closed. What's going on? One comment mentions a column entering Obolon [edit: fixed typo], but I have trouble finding images or video that might hint as to the total number of troops.
This question is ranked fairly low on Metaculus's list when you're casually browsing. My first guess was that people who voted on this 3 weeks ago have now just forgotten to update. What obvious fact am I missing?
That seems like a pretty silly question. It's quite possible that >= 100 German troops, under German banner, entered Moscow by the end of 1941 - you'd need to dig deep into the TO&E of Wehrmacht motorized reconnaissance elements, casualty reports and operational maps that may no longer exist, and be clear of your definition of "Moscow", but there's a fair chance that, yeah, for a few hours on the very outskirts of something that could be reasonably called Moscow, that happened.
It's roughly the equivalent of counting coup, good for a few cheap status points but changing nothing that matters. And, yeah, fog of war means resolving a blip that small is going to be tough. If Metaculus had existed in 1941, the Moscow version of that question would *still* be open.
Fighting is currently in Irpin, which is right in front of the boundary. Note that between Irpin and actuall urban housing, there is a lot of empty space (edit: actually it is probably forest) within administrative city limits.
From the eastern side, there are "Heavy clashes reported near Brovary" (https://liveuamap.com/en/2022/14-march-heavy-clashes-reported-near-brovary), e.g. also just right in front of the another, even larger, empty space (edit: actually it is probably another forest) within city's administrative limits.
Obolon (not Obolev) is on the map as district within the city, but I do not think that it is proven whether Russian troops already entered it. Perhaps it was just incorrect reporting.
>A repelled attack on Kyiv still would count, provided it could be ascertained to a high degree of confidence that at least 100 Russian troops were within city limits.
I don't know where exactly the city limits are, but IIRC Russia tried to move some columns straight into the city in the opening days. Probably one of them got close enough to meet the conditions.
Edit: I don't know why it wasn't resolved yet, given this - maybe it's a fog of war thing and they're not 100% that over 100 soldiers crossed the line?
Would anyone have recommendations for a psych in the south sydney area or in general if they're willing to do online? Betterhelp gives me like 90 a week estimates for a sub which would be fine if I had some sort of gurantee I'd get something out of this and that it wasn't a thing that I think will probably drag on for a long time.
Thinking of trying the UTS student clinic if anyone knows anything about that or has more recommendations like that.
A history of Western spies (like, working for the USSR) during the Cold War. I'm much more interested by actually ideologically motivated spies like Kim Philby/the Cambridge Five or the Rosenbergs, than just normal boring non-ideological types who were paid off for info. I find the whole topic fascinating, and it sort of reflects how much the Cold War was really a clash of belief systems. I suspect in order to be good, the book will need to be written by a conservative- while I am not personally politically conservative, I doubt that someone on the left is going to be as rigorous on the topic.
And, a history of Japan's economic rise and fall in the 80s and 90s? I understand the basic outlines of the story (the MITI department running postwar industrial policy to great success, the Plaza Accords, the eventual commercial real estate crash, etc.), I'd just to love learn more, preferably from a source a few intellectual grades above 'airport bookstore business writing' level of thought
The strong version of the basic story about MITI industrial policy being a success is contested. Japan was the first to get right what many others since did as well - focus on comparative advantage, enable your firm ecosystem to export successfully and require firms to clear the market test. To this extent their industrial policy was definitely a success, and South Korea and China successfully followed that paradigm later. Anything Japan did over and above this, in terms of trying to pick winners among firms, particular directions of investment and micromanagement of firms, is not universally regarded as successful. Honda chairman for instance had said that MITI wanted to restrict them to two wheelers and Honda succeeded inspite of, not because of MITI.
The book "The Venova Files" is about Soviet Spies. What happened, was an old spook went to the Kremlin, and with a Kremlin guide and was researching restricted Kremlin archives for a book, when the Soviet Union fell apart. Since there was no one there to watch over him, he just copied all the archives he could until someone finally kicked him out of the Kremlin. The book basically exonerates Joe McCarthy's hunt for a commie hiding behind every cornflake ... yes, Joe McCarthy saw commies in his cornflakes, but there really were commies in his cornflakes.
The book consists of a bunch of short stories about every file recovered from the Kremlin archives.
I've been thinking about nuclear deterrence a lot lately.
One of the hardest problems with MAD is, once nuclear weapons are in the air, what do you do? Your half of the world faces imminent destruction. Then you have an awful choice between reprisal -- ultimately ending humanity -- or submitting to your fate, knowing you saved the human race. (In this toy, oversimplified example.)
States who are definitely willing to irrationally fire second are the least likely to be hit by nuclear weapons. You might try to just fake it, but the adversaries sometimes steal your secrets, and will definitely read your secret plan to "not really fire but pretend like you will." So you should aspire to genuinely believe you are a state who will irrationally fire second, that's the best way to avert nuclear catastrophe.
You might set up two file cabinets worth of secret plans, to be opened only in these emergencies. The first box has the plans for retaliation. If you open the second box, it has the plans for staying your hand at the last moment to preserve humanity, since the window for affecting the outcome has already passed.
I don't have much to add that hasn't already been written. Except for the recommendation, in strongest possible terms, that we stop calling this "mutually assured destruction" and instead refer to this dilemma as the "nuke 'em paradox."
>One of the hardest problems with MAD is, once nuclear weapons are in the air, what do you do? Your half of the world faces imminent destruction. Then you have an awful choice between reprisal -- ultimately ending humanity -- or submitting to your fate, knowing you saved the human race. (In this toy, oversimplified example.)
No. Your choice is between shooting back, half your population dying, half their population dying, and someone else inheriting the Earth, or not shooting back, half your population dying, you getting conquered by the followup invasion, and the one who called your bluff inheriting the Earth.
Ord didn't put nuclear war as a 1/1000 probability X-risk because he thought there's only a 0.1% chance of nuclear war this century; he put it at 1/1000 because absent something fatally wrong with our models or an enormous nuclear buildup (to many times Cold War arsenals) we can't actually end humanity that way.
A full nuclear exchange probably wouldn't end humanity. And you could argue that a few million people in the neighboring states of your enemy dying from fallout is worth it to prevent a country willing to launch a nuclear first strike from being the dominant power in the world after having nuked america.
States don't psychoanalyze each other. It's futile and dangerous. You plan your deterrent based around the other side's capabilities, and they plan theirs around yours. So even if a state swore up and down they would never, ever, actually use nuclear weapons, nobody responsible would ever believe that.
The Soviet Union swore it would never use nuclear weapons first. Nobody believed that, indeed the entire complex early-warning and fast-response apparatus of NORAD/SAC was created *because* we didn't believe it -- you don't need to be on a hair-trigger alert if you are 100% confident the other guy will never shoot first.
Modern Russia swears in print it will never use nuclear weapons unless the very survival of the state is at stake, and certainly not just because the survival of Vladimir Putin, or of his pride, is at stake, and the whole reason the world is concerned today is because nobody believes that either. (Indeed, arguably Putin has been frustrated of late *because* it seemed too many people were believing Russia would only use nukes to save itself from ultimate extinction, and he has taken steps to restore a certain amount of scary ambiguity on the point.)
Finally, a full strategic nuclear exchange between the major powers possessing nukes would hardly spell the end of humanity. It wouldn't even spell the end of the respective countries. The truth is much less dramatic and more squalid: something like 20-50 million people would die, immediately or relatively soon, another unknown number of millions would perish in the drastic economic and transport breakdown that followed, and all the nations participating would be reduced to non-major power status for generations.
But India, Brazil, Chile, Indonesia, New Zealand, South Africa, and many other countries will be fully functional, albeit plunged into quite a serious economic shock by the immolation of the world's biggest markets for a period of years.
The "destruction" in "mutually-assured destruction" was never, by its students, meant to imply "every last human being dies" or even "civilization ends" but "my country is reduced to a pale shadow of it fomer self," something as prostrate and miserable as Germany in 1945 or Kharkiv now. This is bad enough that very few national interests can justify its risk, but it does not further imply one need ponder profoundly existential questions about ending all humanity.
A second point is that there is one scenario where a nuclear exchange equals doomsday, and that's widespread use of cobalt/salted bombs. Thankfully this seems to be one of those rare nuclear avenues that nobody has been mad enough to explore in too much detail.
The bigger problem for the world at present (at least, to my thinking) is the issue that setting off an economic/technological dark age by zapping the most advanced and economically productive parts of the world overnight will probably prevent us from ever rising back out past a roughly 18th-century level of technology overall. We simply don't have the easily-available energy sources or surface ores you'd need to re-start the industrial revolution.
In the event of nuclear war, then, we'd better hope that enough remains of the world's trade and industrial infrastructure to keep the remains of the world economy ticking over.
I think you're overestimating what would be lost. There isn't that much actual physical stuff that would be destroyed, or rendered unusable. The main destruction would be human lives, and the network of relationships and agreements that are the underpinning of modern complex highly-specalized life -- the kind of network of relationships and agreements that let me work in an incredibly specialized field in front of a computer, trusting that other people will dedicate themselves to keeping the electricity on, and delivering food to the store so I can buy it on a just-in-time-basis and not have to worry about planting my own potatoes and harvesting enough in the fall to get through the winter on my own. Flung back on our own resources, every man needing to do everything for himself, be an amateur everything, would reverse the enormous efficiency of specialization and cooperation, and make us much poorer. Which is the harm done.
But it's not part of the physical world, and there's nothing that stops it from being rebuilt. You just need new people, and you need to re-establish the networks and understandings. It would take time, for sure, but there's no irreplaceable physical thing that would be gone that prevented it.
I think that losing that knowledge and network of trade and relationships is precisely the problem - it's a recipe for a bronze age collapse scenario. Which would ordinarily be fine - societies decomposing into smaller, less integrated and less organisationally/technologically sophisticated units is often a relief for the poor bloody peasants slaving away to keep all of those specialists above them fed (for the specialists, of course, it's a tragedy). But in our case we simply can't afford to lose the ability to build complex machines, move stuff around the world, or dig hydrocarbons out of the deep.
Industrialisation (which was a contingent rather than a deterministic event in any case) relied on what amounts to a huge cache of untapped, easily-available energy. We don't have that anymore. So, at least until such time as energy generation and manufacturing can be rendered more local, we simply can't afford a general collapse and reversion to smaller, less integrated polities.
I was thinking about this problem recently. It's very reminiscent of Newcomb's paradox. It seems like if you want to "win", you should one-box, i.e. press the button. This is the general idea of functional design theory. If hackers demand a ransom in exchange for not releasing stolen data, and the ransom is not paid, they usually release the data. And if the ransom is paid, they usually don't release the data. Even through releasing or not releasing the data provides them no short-term benefit, they want to set the precedent that some hackers are good functional decision theorists who stay true to their word, so that future victims will then also evaluate their options in terms of FDT and pay the ransom.
But maybe the idea behind FDT kind of breaks down if your decision literally ends humanity? Sure, over the entire length of an iterated prisoner's dilemma where agents can think about the long-term consequences of defecting, they may just decide to cooperate, but if the very action of defecting terminates the iteration of the dilemma by eliminating the other agent, the assumptions probably fall apart.
That is why ideally this system would be fully automated - so that if you are attacked, you automatically retaliate, and your enemies know that you will automatically retaliate.
One interesting parallel to Newcomb's paradox is that it is an exploration of a sufficiently smart oracle which can determine whether you're likely to one-box or two-box (which seems plausible as long as you don't need absolute perfection and are satisfied with merely high certainty); and in a similar manner, the state system will try (and must try, according to game theory and MAD) to put in a lot of effort to detect whether you're the type of person who will "press the red button" or avoid retaliation, and in the latter case ensure that you get some other position where you won't need to make that choice. Like, even if only a minority of people are psychologically capable or willing to actually retaliate, I would assume that the nuclear command would intentionally have been selected to consist of those people.
See, Deaths's end (book 3 of the Three Body Trilogy), and how putting someone in charge who isn't willing to pull the trigger can actually lead to armageddon. (Yes, yes, I know, fictional evidence and all, but thankfully there isn't any real evidence to reason from). See also Bret Devereaux's most recent post over at acoup.
I don't think that your simplified model of MAD reflects reality because it's not really half of the world throwing nukes at the other half of the world, it's 25% of the world throwing nukes at each other and most of the world staring at it with horror in their faces. If we're talking about "ending humanity" and half of the nukes are already in the air, then IMHO 5000 vs 10000 nukes will not make a qualitative difference of ending humanity or not - the question is whether you nuke the culprits, but the harmful impact on e.g. Africa, South America and much of Asia will be qualitatively similar no matter if you push the button or not. Also, the choice does not need to be made right now - a large part of the deterrent of major nations is planned through "second strike" retaliation capability e.g. nuclear-armed submarines which may fire their missiles after the first strike has hit and the geopolitical consequences have been seen.
Like, the existence of the human race is not really at stake (see the post linked above considering nuclear winter as exaggerated, especially as we have literally 10 times less nukes than when cold war estimates of consequences were made), but the existence of *your* "world as you know it" is. Or, in some cases, if you surface on your sub and see that your homeland doesn't exist anymore, you may follow the orders and enact retaliation - again, not against the majority of humanity that's not going to be involved.
I worry about the Endowment effect applied to the Russian invasion. The more Russia/Putin pour in to the war in blood and treasure, the more urgent it becomes for them to win and the more they will escalate. In turn, the more escalation, the greater the pressure for the US to get involved.
Nah. We can sustain current levels of support forever without breaking a sweat. A lot of nations have come to grief underestimating the productive capacity of the United States. Worth remembering we make more weapons than the entire rest of the world combined, and even in the most peaceful years we make a strong effort to sell tons to foreigners to clear space in the warehouse for next years' model.
Besides, the opportunity to do battle testing of anti-armor, early warning, and portable SAM systems should not be wasted. This kind of stuff is gold for the gnomes back at Raytheon, and a bunch of the folks in the Pentagon basement are all watching eagerly, too. Seriously, when was the last time it was possible to field-test weapon systems and protocols designed to neutralize Russian assets against actual Russian assets operated by actual Russians? Highly useful.
I meant Western political support more generally; in the case of US I expect sanctions are the first thing that would be questioned (I expect more immediate problem in the EU with respect to refugees, but that is admittedly beyond the scope of OP)
Why? If it doesn't cost us much, why should we care if sanctions on Russia go on forever? We don't much care about the sanctions on Iran because it doesn't cost us anything noticeable.
I've yet to see a good argument that sanctions on Russia will have any seriously noticeable effect on the American pocketbook. In 2019 US imports from Russia were $22 billion, of which the largest categoriies other than oil (oil is about 60% of the total value) were precious metals at $2.2 billion and iron at $1.4 billion. Those are rounding errors for a $20,000 billion economy. The oil imports are a single-digit percentage, and are readily replaceable by domestic supplies, if the price oif oil is more than ~$60/bbl or so, where fracking breaks even.
Basically Russia just isn't very important to the US economy. We do more business with Chile and Vietnam than we do with Russia.
Main significance of US sanctions is not direct US-Russia trade (which is indeed tiny), but that they make it very difficult for Russia to export anything anywhere, just like it happened with Iran, since US has such dominant position in global financial and similar services.
And Russia is an important exporter of many commodities whose prices are determined on global markets, so when Russian exports are missing, global prices spike; and that is indeed happening and it will affect even US industry.
Ad oil specificaly, I think there will be reluctance to invest in expanding production at least in the "West", given that there is an expectation that governments will be seeking to shut it down as soon as possible due to climate targets.
None of this means that continuing sanctions would be somehow crippling to US economy; but it certainly means there is going to be pressure to make them softer; perhaps first informally, by not going too hard after workarounds etc.
I don't think so. You're right that some small countries somewhere or other might be priced out, so as usual folks in the Third World have a good reason to curse this rivalry, but it isn't going to change things here sufficiently to notice. I don't think commodity prices are volatile because of Russian sanctions, or fear of same, but because of inflation -- a prexisting problem -- and the fear of what *else* might happen. A much bigger war is a far bigger concern than any amount of sanctions to people who trade on the European or US markets. Mind you, I'm not suggesting people who trade European equities don't have real concerns about the impact of sanctions on European markets -- that's a whole different story, particularly for Germany.
Ha ha, no, there is zero reluctance on the part of investors to put their money into oil and gas production when the price of oil is this high. You'll notice XOM is at a multi-year high? They're awash in investment capital, and so are all the middle-sized guys. Certainly they are a little concerned with what climate-control nostrums might roll out of Washington -- this has been a constant source of mild concern since 1975 or so -- but (1) historically not much ever has, and (2) the Democrats are not suicidal enough to propose anything serious during a big run up in gasoline prices, and (3) the Democrats are going to be slaughtered at the polls in 6 months anyway.
So I don't buy the argument at all. Indeed, in my entire life, I've *never* seen serious domestic US pressure to ease any sanctions for practical economic reasons, and it would be a big surprise if it happened this time, for the first time.
Not much issue from refugee, at least for now. I don't expect to change shortly, there is absolutely zero traction of the traditionnal anti-immigration political discourse in this particular case. Ukrainian are welcomed in eastern Europe en masse, and in western Europe are not causing any kind of ideological resistance, only some practical details and logistic difficulties.
There are many reason for this: it's seen as temporary, people moving are of more or less similar culture, from immediate neighbors (seen from eastern europe at least) without previous stigma, this war fit in previous cold war narrative which is still very present in older and/or eastern segment of Europe population (exactly the one that is not too fond of immigration in general), and it's mostly old folks, women and children (while the previous waves where mostly young men).
It may be racist, but it's also the reality: the syrian-war immigration is so different on so many point that you should not expect the same reaction this time. This differentiated reaction is not only specific to Europe, you can see it in most immigration waves all across the world.
Word among sources I expect to be knowledgeable (e.g. https://twitter.com/kamilkazani/status/1503053699798769666 linked by Marginal Revolution today) is that the more relevant effect driving escalation is status / legitimacy. Putin started the war in part to cement his domestic political position and will likely face a serious uprising or coup attempt if he can't sell it as a clear victory.
Given that the stated casus belli was bullshit anyway though, can't he withdraw in exchange for bullshit concessions and call it a victory? "We successfully de-Nazified Ukraine, hooray"
If Russia withdraws today, there are going to be a hundred thousand Russians going home with the firsthand knowledge that Russia didn't de-anythingize Ukraine. And they're going to talk to their friends and family. Some lies really are too big.
If propaganda was that easy everyone would do it. :-)
The stated casus belli isn't really for consumption by the people who could cause trouble for Putin (oligarchs, generals, politicians, etc.). It's for Russians who only know about the war through state TV (and maybe also a little bit for particularly gullible Westerners). On this theory, Putin embarked on the war to shore up his support among the first group, precisely because it would be hard to fake an achievement like that. If he wins it signals competence, "grip on power", and the benefits of keeping him around (I'm sure many in Russia really would prefer a Russian-sphere Ukraine to a NATO-sphere one, other things equal). If he tries and fails... well, that would signal the opposite.
We have already seen Russian government backpedaling on some of their claims in national TV - e.g. now suddenly they say that change of Ukrainian government is not a requirement, and it never was one. For "internal consumption" Russia has no problem effectively issuing convincing statements like "Oceania has always been at war with Eastasia" even if this directly contradicts something they said earlier. They are good at it, they have all the tools, and it works.
So I'm fairly sure that if Putin decides to make some agreement, that the propaganda machine can sell to the Russian public that that this is a big victory; and that it's the exact thing that Russia initially wanted to achieve. And in peace talks, Ukraine and the west would likely cooperate by inserting some kinds of concessions that are practically insignificant but have symbolic PR value that help Putin save face. Those in Russia who believe the TV will buy it, and those who were always skeptical will just be happy that there is peace.
Looking back at things from twenty years later, the answer seems to be "no".
Back in 2002 (at least as I recall it; human memory going back two decades is quite fallible), there were lots of people (on OpEd pages, in opinion/policy magazines, and on these new-fangled things called "blogs") who expected that US forces remaining anywhere on the Arabian Peninsula would continue to cause the irritation that led to 9/11 whether or not they were inside Saudi Arabia's official borders.
This was not an opinion, mind, that was strongly associated with people either for or against the idea of invading Iraq. There was an op-ed of the era (at least, again, as I recall it) that could be summarized as "Iraq had nothing to do with 9/11, we don't need to invade Iraq, invading Iraq is stupid, to stop another 9/11 we just need to get our forces entirely off the Arabian Peninsula (specifically including out of that new airbase in Qatar)."
Is immortality good from a utilitarian pov because it means more years to enjoy more hedons or bad bc of the law of diminishing returns (things become boring, less urgent, less meaningful, etc. once I know I’ll live forever or live for much longer)?
Even if boredom reduces the utility gained from experiences, it seems unlikely that it would make the utility *negative,* so over the long run the positive experiences you have from immortality should outweigh any losses from not being driven by your mortality.
I think future people will be able to modify themselves so they don't get bored unless they want to, so it means more years to do whatever you want, explore things, and self-actualize.
I'm surprised that kind of mind/body modification doesn't come up more often in discussions of immortality. If you can suppress boredom, then you could just do a thing that makes you happy forever as long as entropy or ill intent don't get you.
> If you can suppress boredom, then you could just do a thing that makes you happy forever as long as entropy or ill intent don't get you.
This line of thought seems like it is isomorphic with you just modifying yourself to be perfectly happy all the time while not doing anything at all, which is not something I (the current, unmodified, version of me) would consider a good outcome.
A professor in college had a long discussion with us about a similar topic. If someone could be hooked up to a heroin machine (or pick a better drug for this) and just live in a constant state of...I guess we would call it euphoria or something like that, would that be a good life.
The strong consensus was no, but it was hard to separate the hidden variables (like the people who had to run the hospital, and the resources expended so that some could live like that).
What kind of immortality? Involuntary immortality seems like torture under pretty much any values system, while voluntary immortality seems much more likely to be good. My personal feeling is that voluntary immortality is likely to be substantially net positive from a utilitarian POV, and my guess is that most utilitarians would feel similarly. The "more years to enjoy more hedons" effect is clearly good, while it's not obvious to me whether the second-order effects are good or bad. Yes there's likely to be more boredom with immortality, but there's also less loss of loved ones and more time for a truly deep exploration of entire fields of study, kinds of art etc. Given that there's one clearly good effect and one ambiguous effect, I'd lean pretty strongly towards it being good.
Why, if you lived forever, you could have sex with every person whom ever existed. Or a meaningful relationship. We could each spend ten years married to Hitler, and find out what makes him tick. Imagine the possibilities. We would either find out more than we ever wanted to know about Hitler, or we'd find out more than we ever wanted to know about ourselves.
it's a fairly rudimentary observation, and not very philosophically sophisticated, but QALYs is a very relevant point here. Furthermore, I imagine suicide remains an option.
More straightforwardly, while I theoretically understand the question, boredom and lack of urgency hardly seem like a problem to me; it's a big weird world filled with lots of problems to solve, challenges to overcome, pleasures to indulge...
Scott, banal but important question about the Book Review competition.
I submitted my entry last week, and I got a one-line acknowledgement from Google Forms that my "reply had been received". That's it. Should I have expected anything more? Is it going to be possible to know at some stage if an entry has been received or not, and is being considered?
I definitely am not checking that Google Form and wouldn't have sent you further acknowlegement. It might be worth me doing that at some stage, but for now don't worry.
I've been reading his book, The Invisible Plague: The Rise of Mental Illness from 1750 to the Present.
The book goes over Torrey’s thesis, that the prevalence of insanity, which was once considerably less than one case per 1,000 total population, has risen beyond five cases in 1,000.
He goes through (in really impressive detail) the records on insanity in England, Ireland, Canada, and the United States over the last 250 years.
I didn’t know if anyone had read the book, and found any obvious refutations?
I've found it pretty convincing thus far. I’m considering doing a book review on it for the book review contest if I get some time.
With things like #1, it's hard not to think about it as the EA movement just hiring a marketing person of sorts. To see what I mean, go to the page and scroll down and read the rules. You will find they've already marked down what they think the most important issues are, the right moral system for approaching them (utilitarianism), and so on. They link from the rules to a site which even cranks down on what kind of writer they want, which is broadly someone who is basically Scott-like in most respects.
I don't actually have a problem with any of this; it seems like "let's pay for more dialogue, especially around this thing we think is great and great for humanity" doesn't seem like a bad thing to me. But you'd still be surprised to find, for instance, that they didn't end up giving it to someone "basically like" Applied divinity studies, perhaps more focused on one of the pet issues.
I could be completely wrong about that - lord knows I don't know the individuals involved. But I still look forward to more people getting into the writer-grant game; right now it's basically this promoting EA, and another one that promotes being a spread-sheet and economics enthusiasts, or nothing. I think whatever minor discomfort I get from stuff like this gets solved as the diversity of grant-payers grows.
I think those are mostly suggestions, aimed to point at the area they want to people who might not know what effective altruism is. My guess is ADS is in the category of things they would approve of. I know he has already gotten an Emergent Ventures grant and he may have gotten others.
ADS is sort of my mental model example for where this money ends up going. They are "too old" to get the grant by the rules of the contest, but I think someone who writes pretty similarly and is sort of explicitly-EA-aligned in the way they are sort of explicitely progress-studies aligned probably runs a pretty good chance of getting the nod.
I have no problem with this group's selection criteria at all, except to the extent anybody you'd expect to win Tyler's grant would probably run a pretty good chance of winning this one too, and they are pretty much the only grants that exist.
I'm not disapproving of EA people here; they are hiring a person to take the right positions as they see them, within a certain range of varience. They have their money where they mouth is proving their sincerity/commitment to those ideas in more ways that one. I'm just surprised there aren't more groups doing it.
Basically there's two groups who subsidize people talking about Utilitarianism, trying to ensure moderate amounts of paperclips and talking about economics are the way forward. I'm looking at this and going "OK, here's two groups who will pay for bog-standard bay area gray triber views to be promoted. Who is paying for literally anything else?" and there's 0 entries in that field.
A long time ago, a bunch of conservatives started paying to promote/encourage young conservative legal minds in a bunch of different ways, and that's had a noticable effect; basically everyone goes "yeah, the world is a lot different and better for conservatives because they invested in this way". EA here is paying for the same thing, but with writers instead of lawyers; with enough money and on a long enough timeline, they will end up controlling thinkpiece-SCOTUS, so to speak.
It's just weird to me that more groups aren't doing this, especially when it's so cheap (by "lots of tech money" standards) to do so.
I think all my opinions on this instantly change if I find it it's been tried and failed in the last 20 years or so. Entirely possible someone's tried to manufacture thought-leaders in this way and failed, and EA just didn't get the memo.
I've been thinking about a consequence of increased healthspan. While some people sort of have the same years gain of experience over and over, we can assume that some will continue to learn and to become more skillful at the things they care about.
There's been very little about this in sf-- people would rather write dystopias. Also it's hard to imagine what even as little as an extra 50 years of learning and practice could do for people-- about as hard to write about as an accurate portrayal of increased intelligence.
There are a couple of possibilities. One is a limited longevity-- it turns out that people stall out at a couple of centuries. No matter when you were born, you have a chance to be among the top pianists in the world. You just have to work and wait. Same for something like running a major museum.
The plus side (though it's rough for ambitious younger people) is that the level of skill in the world (probably including skill at teaching) is going up.
The other possibility is that there isn't a limited lifespan any more. People are likely to just keep going on, though there are presumably issues with memory and possibly with boredom.
I assume people will develop new skills and hierarchies to be at the top of.
Recent sci-fi goes through incredible contortions just to _avoid_ lifespan extension and transhumanism in general, because the consequences to the setting outweigh whatever theme the novel/rpg/etc was supposed to be about.
This is the biggest elephant in the room. Seeing small children today and realizing there's an above-coinflip chance _they will never grow old_ blows my mind.
There probably is an xrisk or two, but I'm not counting it.
We have an ongoing biotech revolution and AI revolution that's just getting started. Both were futurist joke fields 20 years ago and now even normies know something's up.
Children born this year will be 60 in 2082. I estimate us getting through the technological (but perhaps not legal & mass production) hurdles of radical lifespan extension somewhere between now and 2100. Based on how AlphaFold caught me by surprise and casually solved an impossible problem, I'm ready to believe anything.
I think the correct path is to change direction. I retired at 55, and returned to the University for a different career. Now I have a completely different career path, I see the whole universe as new, even though I'm 60.
I'll mention Larry Niven, 'booster spice' and Known Universe series.
I see aging as our answer to a trillion cells and cancer... (Unconstrained division of one of those cells.) So I don't think there will be any big change. As I've gotten older (63) I'm seeing death as partly a good thing. Sure I want to see what's going to happen, but also I can get out of the way and leave the world to the next generation, (and more selfishly, my assets to my kids.)
"Rainbows End" by Vernor Vinge is an itneresting near future scifi take on this, which explores old people dealing with being unexpectedly healthy, but not having a clear role in society anymore.
I think a lot depends on whether or not they become "old" in various psychological/cognitive ways. I'm not talking about senility per se, but the ways in which I experience my own mind changing. And of course what an artificially long lived person experiences might be something unlike any life stage of a person experiencing normal aging. We don't know.
What I don't expect is for their life to be same as usual, only more of it. I.e. Heinlein's series of novels featuring extreme longevity is just plain wrong. (He has a character who's been in the same career "since first maturity", as she puts it, in explaining why she's the top of her profession at a mere 2-300 years of age.)
I think the most interesting fictional depiction I've seen of an extremely long-lived person is actually Doctor Who (which is ironic, since the world-building of Doctor Who is otherwise rather terrible).
1) The Doctor acts like an adult among children. He makes important decisions on behalf of the people around him without even asking their input, and sometimes against their objections. He regards anything that goes wrong as his fault for failing to control it. He views himself as much smarter than everyone else--not in a prideful way to bolster his ego, but simply as a fact.
2) The Doctor doesn't update his life philosophies for anyone or anything (even when the flaws are fairly obvious), because he already considered and rejected all the counter-arguments ages ago. The current crisis hardly weighs against his many lifetimes of experience.
3) The Doctor is a master of arcana. No matter what situation he finds himself in, he always has some esoteric bit of scientific or cultural knowledge he can exploit to create new options and get himself out of a bind.
4) The Doctor has already seen everything, so he spends his time showing the wonders of the universe to young people who have never seen them before, so he can experience their excitement vicariously.
Here's another, but perhaps less thoroughly worked out.
"I fear Benedict. He is the Master of Arms for Amber. Can you conceive of a millennium? A thousand years? Several of them? Can you understand a man who, for almost every day of a lifetime like that, has spent some time dwelling with weapons, tactics, strategy? All that there is of military science thunders in his head. He has often journeyed from shadow to shadow, witnessing variation after variation on the same battle, with but slightly altered circumstances, in order to test his theories of warfare. He has commanded armies so vast that you could watch them march by day after day and see no end to the columns. Although he is inconvenienced by the loss of his arm, I would not wish to fight with him either with weapons or barehanded. It is fortunate that he has no designs upon the throne, or he would be occupying it right now. If he were, I believe that I would give up at this moment and pay him homage. I fear Benedict. "
One way to look at this is that the societies described in fiction are a very biased subsample out of all possible/plausible/likely hypothetical societies, namely, societies that are (a) interesting - not obviously permanently utopian or dystopian, but where a fiction story with interesting strife can happen; and (b) reasonably easily explainable to reader through a story.
If we would intentionally try to perform a systematic review of hypothetical futures for some practical purpose, I think that we would consider many options that a fiction writer would discard because of how it does/doesn't fit the needs of storytelling.
I mean, from the youth's perspective, it's pretty dystopian when there are no jobs because the current occupants live forever and only get more experienced as time goes on. The only recourse is to aggressively reproduce to create demand for your services.
That's why I think massive increases in longevity will be a big driver for space colonization. Faced with either sharing power and influence with the younger folk, or giving them the means and opportunity to go off and form their own small ponds to be big fish in, the folks in charge will tacitly favor the latter.
In my story, the first space colonization effort is crewed by the future equivalent of second sons of nobility - people who are high status enough to go on such a prestigious venture, but who have no actual chances of advancement at home and thus are willing to go on what is essentially a suicide mission (even if it succeeds, it's a one way trip).
This is under the assumption of no FTL. There's no way the masses will ever come close to getting the resources required to attempt interstellar travel.
Well, in *my* imagined SF story society where aging is solved, society stagnates due to the "science advances one funeral at a time" effect and the ability of elites to entrench themselves in power indefinitely. Also, safetyism is turned up to 11 because people have more to lose.
Reminds me of "Icehenge" by Kim Stanley Robinson! (Or the Mars series, but I like Icehenge better-- more adventurous and much more concise)
TL;DR: lifespan is extended by an order of magnitude but people still "peak" in their 30's-50's, with interesting effects on academic norms and progress.
That's the usual idea, but is it wise or healthy to only assume the worst?
One possibility is that science stagnates, but the arts keep improving.
Also, we don't know what age people stay at. Maybe science advances one funeral at a time (is that true?) because people get mentally rigid with age. If people are long-term 20 year olds, maybe things will be different.
What does it mean for the arts to "improve"? "Art" is defined by trends and fashion, so unlike science it doesn't improve or deteriorate, it can only change.
Also, people are just less motivated when the future seems so long. This is a world where people think nothing of slaving away for 50 years on a PHD, and even then they won't be able to get a job until someone higher up kicks it, which almost never happens.
In my anecdotal experience in college, women seem to give themselves a much larger work load then men. This being done usually through having more majors or minors. I frequently would hear from my female friends how busy they were, them majoring in everything from theater to business to chemistry. I would hardly ever hear from my male friends how busy they were and they usually didn't seem to be under as much work load in those same areas.
I've also heard (but have not dug too deeply into) that women make up the majority of graduate degree and PHD earners now as well. Extended schooling brings much more work. I just wonder why women seem to give themselves more work to do in college where I see men not have to do as much. The sex differences could be easily explained away as women are more vocal about the amount of work, but that still doesn't explain why women take on such a heavy workload in the first place. I'm wondering if any of you have noticed the same thing.
As a PhD I disagree; extended schooling is the easy option compared to going out and getting a real job.
A PhD (unless done for immigration reasons) is an incredibly expensive luxury good which you purchase at a time in your life when you can least afford it. You pay hundreds of thousands of dollars in foregone earnings for the right to call yourself "doctor" and spend your twenties obsessing over your favourite subject. It's something that you're much more likely to do if you have a "fallback position" of marrying a rich dude than if you are expected to be your household's primary breadwinner. And it's a double whammy for men because mens' attractiveness is highly dependent on their financial position whereas womens' is not. (I had a certain amount of family money so spending my twenties earning fuck-all wasn't a big problem for me, but most men are not so lucky.)
I think this depends very much on what kind of a Ph.D. you're talking about. In engineering, at least the aerospace variety, I'm pretty sure the sweet spot is an MS, but I think the marginal difference between a Ph.D. on the one hand and a two-year head start on the other is pretty small.
In the hard sciences, a Ph.D. is a prerequisite to most of the really good jobs, and those jobs are really good. Telling someone they shouldn't be a biochemist because biochemists have to spend three more years in school than engineers is kind of missing the point - at that stage, you're trading modest differences in dollar-maximization against getting paid to do what you actually want to do.
A Ph.D. in Music Theory, is either a lottery ticket or a luxury good.
I don't really think your take is correct. Yes, you're forgoing some earnings in the short term, but lifetime earnings for people with doctorates seem noticeably higher on-average than those for people with a bachelors or masters degree. https://www2.ed.gov/policy/highered/reg/hearulemaking/2011/collegepayoff.pdf (PDF warning). Moreover, I think the characterization of a PHD as "obsessing over your favorite subject" understates the extent to which getting a PHD is hard work, and the extent to which PHD students are more likely to have depression than working professionals https://www.nature.com/articles/s41599-021-00983-8#Sec16.
Once upon a time, bean would have posted something like this as a series of comments on slatestarcodex, but now that navalgazing is all grown up I feel like it's worth pointing out the existence of https://www.navalgazing.net/Early-Lessons-from-the-War-in-Ukraine
The first part of this BBC podcast covers some of the same issues: https://www.bbc.co.uk/sounds/play/m0015f1k . Quote: "The part of the Russian army that is modern isn't large, and the part that is large isn't modern". It suggests that troops on exercises often report equipment as working when it isn't, to avoid hassle, and that some equipment is sold on the black market. So the less professional part of the army is badly equipped and that isn't apparent to the senior commanders who are told everthing is working fine.
It's an intellectual and charitable movement that tries to find the ways to do the most good. So where most people when donating money think about what sorts of causes are closest to them or something like that, an EA tries to think about where their dollar might be put to the best use to promote overall wellbeing. Everything else is downstream from that way of thinking.
Right. I see. I've a hypothesis that most charity and altruism is affective rather than effective. That's not a criticism of this movement especially and perhaps I'm being too cynical after 20 years studying human nature.
You can't say that in general. It's entirely dependnent on your utility function. You need to qualify it with the appropriate standard by which you judge effectiveness. I personally don't believe saving as many people's lives in Africa as possible is optimal for promiting the long-term wellbeing of humanity. Spending the money on contraceptives for Africa would likely be much more effective by my standards.
Under vast majority of sane and nonevil utility functions "one dollar for malaria nets helps way more than one dollar for the Make A Wish foundation" is true.
Would you consider a utility function that includes a high "discount ratio" depending on your "relationship distance" to the other person (e.g. caring about your children much more than your neighbors, and caring about your neighbors much more than someone you'll never encounter) as insane or evil?
Honestly, even if you have an inverse distance squared law for utility, you would have to discount Africans down to the level of, like, slime mold, before mosquito nets didn't outweigh random crap
That is why the AMF is so highly ranked, because charity is like, the least efficient market possible, you can buy saved lives for almost nothing compared to far more popular 'help people' charities like cancer research or disaster relief
But yes, we would tend to think of those preferences as being typical human nonsense which we should all strive to overcome. I don't know if I would blame you for failing to do so, it is *really hard* to make yourself care about some random african kid as much a you care about your own children. But I think I would blame you if you failed to recognize that you ought to, even if you can't.
The way out of feeling horribly guilty is, the more we pour money into actually effective charity kind of the more efficient the market will get, until it is impossible to save lives for a few dollars because everybody already has mosquito nets. At that point, you'll finally be allowed to spend $100 on new clothes without worrying that you should have saved a life instead, because you simply won't be able to
Well, the utility function is not up for grabs, it is what it is, there is no ought - if according to the current utility function A is better than B, and according to some other utility function B is better than A, than I should *not* strive to adopt that other utility function because it will cause B to happen and (according to the current one) A is better.
To use your example, if I (or my neighbor) would literally care about some random african kid as much as I care about my own children, and act accordingly e.g. with respect to resource allocation, I would consider that I (or my neighbor doing the same) am a bad parent that is grossly failing in my parental duties of care (which require, at a bare minimum, that I guarantee my kids orders of magnitude more resources than the African median - if my kids got that level of resources, that would be literally criminal neglect that would and should result in my community taking away my kids to raise them better) that act would be grossly immoral. Like, it's great to help others, and saving a life with some of your extra money is a great use of that money, but if you're helping others so much that your kids get just as small share as a random African kids, that's far too much and not okay anymore, that's taking away from your kids what they deserve. In essence, helping others is great with your "free spending" resources, but you're not morally entitled to unilaterally choose to give away everything to outsiders because your family and community do deserve more support from you - you are a part of those communities, you have mutual obligations that you don't have for the rest of the world. Like, if my neighbor had a utility function like you describe, they would be a good person but a so-so neighbor. I would want to be able to rely on them that they will, as a neighbor, favor my interests over a random person - for example, if a violent conflict arrives (as the recent Ukraine war), can I rely on them that they will take my side? If they would explicitly try to stay neutral and say that both sides deserve equal care, I would consider that behavior as immoral, shirking their moral duty to the community.
Far from feeling horribly guilty and striving to overcome that, I consider that it is my ethical and moral duty to ensure support for my dependents, and it is my ethical and moral duty to my family and community to continue caring about them at some rate that's higher than "equally divided 1-in-8-billion share of caring", and if I adopted such a globally-equal utility function then that would be essentially defection in a cooperative game, an immoral, antisocial activity that should rightly result in reprisal and shunning from my community and extended family - in essence, exclusion from the tribe because I choose to abandon my tribe(s) in favor of others. The utility function is not up for grabs - for most utility functions, changing your utility function has very poor utility.
A while back, I started working on a Wikipedia page covering David Benatar's "The Human Predicament," a work arguing strongly against bringing more humans into existence. You can read the draft here (I'm around halfway through the synopsis section so far): https://en.wikipedia.org/wiki/Draft:The_Human_Predicament
Would publishing such an article likely be overall net helpful or harmful?
EDIT: Assume this will be published on a private blog, with the actual Wikipedia article either significantly pared down from the current draft, or significantly more external sources added.
I'm not sure if Yitz is eligible to submit a review of ''The Human Predicament'' to Scott's book review contest now, but this is the type of book I would love to read about there.
I agree that there's too much of the primary source. However, if it passes on notability, I would welcome it as a wikipedia page - it's approximately 7 times better than the average Wiki page and it's not even finished yet. I like it.
Thanks! Alex Power is correct above however, in the sense that anything that I can't pin down as being supported by Wikipedia policy is likely to get removed, and get removed fast. The "Summary" section is the exception here, as I believe it's explicitly stated that you can provide a short summery of a book without citing sources. A longer summery is more tricky, and if I want the info to stick on-wiki, it will need to be backed up by others writing about the book.
Anyway, all of this is a bit of a distraction from my original intention in mentioning this here, which is to question if providing more visibility to (and more steelman arguments for) Benatar's philosophy is likely to be helpful or harmful.
Seven times longer, perhaps: there are many many articles on villages, 19th century sportsmen, etc. that are quite short.
If you cut the "content" and "summary" section it might pass WP:AFC once somebody fills in the "[add section about how this book expands on that]" placeholder.
I'll likely move it there, as I started writing this before I had a good grasp of Wikipedia policy, and re-reading it you're correct, it's sourced too heavily on itself.
Don’t know if you’ve finished this yet, Freddie but in chapter 10 - ‘Innovation’ - there is a reference to the NLP lexical analysis of Iris Murdoch’s “Jackson’s Dilemma”. I know some people thought she was showing cognitive decline with that one, but it is still my favorite IM novel.
I’ve always had a soft spot for a good joke. From the Thomas Insel - who is not involuntarily selibate, BTW - book that Freddie is taking about:
‘There is an old joke about the impact of psychiatric treatments. A cardiologist and a psychiatrist are kidnapped. The captors explain that they will shoot one of the victims and release the one who has done the most for humanity. The cardiologist explains that his field has developed many new drugs and procedures, preventing millions of heart attacks and saving millions of lives. “And you?” the kidnappers ask the psychiatrist. “Well, the thing is,” he begins, “the brain is really complicated. It’s the most complicated organ in the body.” The cardiologist interrupts, “Oh no, I can’t listen to this again. Just shoot me now.”’
The question from the audience that he related in his February Atlantic article has me interested:
‘ When the Q&A period began, he jumped to the microphone. “You really don’t get it,” he said. “My 23-year-old son has schizophrenia. He has been hospitalized five times, made three suicide attempts, and now he is homeless. Our house is on fire and you are talking about the chemistry of the paint.” As I stood there somewhat dumbstruck, he asked, “What are you doing to put out this fire?”’
> The effective altruist movement is offering $100,000 prizes to each of the top five new EA-aligned blogs this year. If you were thinking of writing a blog that touches on EA topics (x-risk, progress, global development, moral philosophy, AI, etc) now’s a pretty good time.
Sorry, is this for pre-existing blogs (started in the last twelve months) or new blogs that will be judged at some later date this year? The rules are unclear. They hope to award in 2022. They haven't said, for example, if I started a blog today if I'd be ineligible because they're judging on the trailing twelve months. In fact, without your comment I'd assume they were only looking for pre-existing blogs.
This honestly strikes me as incredibly weird. You have this thing where you want to encourage writing of a certain type to promote (X) where X is a movement, or a moral system, or whatever. And for your pool you have every blogger that exists, which presumably includes a bunch of people who are doing pretty good work but languishing in obscurity.
But the priority goes to 3-monthers, which seems counterproductive to me. You have a smaller body of work to judge them by, you have no idea how much stick-to-it they have, etc. It just seems like a bad call. I think if I had to steelman it, it would be something like:
1. A lot of blogs start and fail in X months *because* they get no encouragement; it's a lot harder to imagine a blog not making it to a year if they have a 100k obligation.
2. Anybody who has been doing good work for 14 months without reward is probably likely to keep doing it even if we don't pay them - we might not get *quite as much* as if we paid those guys but we can get most of what we would have got, for free.
3. Saying "best new blog" is a lot more compelling than having to explain that some people haven't had "big exposure event" luck in the first 12 months and aren't doing so well.
Which is fine, but you feel bad for people who are doing good work but haven't had great luck in the first 15 months or whatever. I've had a lot of that luck and have a lot of unearned success as a result, and there but for the grace of god go I.
I think their goal is to encourage people to start new blogs, and the "three months" thing is only in there so people who started one day before their contest don't feel cheated.
As a longer-term EA blogger, I feel like the movement has been very nice to me and made it clear I can access their money if I need it. I hope other longer-run EAish bloggers have had similar experiences.
The "to get more people to make completely new blogs" motivation make a lot of sense to me. My brain was de-emphasizing that quite a bit for whatever reason, which is weird on my part. That part is a lot less weird to me than "9 month old blog make sense for this, 13 month old blog doesn't".
But your experience/impression they would be helpful for an older blog sortof negates/superceeds my "vague impressions of weirdness" anyhow, so my comment is probably mooted on all fronts.
The rules say "Qualifying blogs will generally need to be new or started within the last 12 months, though exceptions could be made for special cases (like a long inactive blog). Please reach out if you have questions."
I'll reach out. I guess it's just if they said something like, "Judging will start in November" vs "Judging will start tomorrow" those are very different competitions! The former is obviously trying to get people to start new blogs. The latter is more like a "best newcomer" award.
ETA: Even the last twelve months comment doesn't say twelve months from what. Today?
“ Steven Ehrbar gives a theory I’d never heard before for why US invaded Iraq: to unpin US garrisons in Saudi Arabia.” It’s hard to believe that the Bush Administration was that competent. If they had been, would they have ignored that taking Hussein down pretty much guaranteed Iranian hegemony over Iraq?
I remember much Neocon hope in those days that the people of Iran would rise up and demand freedom. Arab Spring and all a decade early. People really thought liberal democracy would just work in Iraq and Afghanistan and all the neighboring countries would want to follow suit.
Maybe, maybe not. If they decided that if their top-line goal wasn't achievable, a second-best outcome was a satellite of Iran, and on that basis decided to go ahead, then they called it exactly right.
To quote the New Yorker from 2003: “One senior British official dryly told Newsweek before the invasion, “Everyone wants to go to Baghdad. Real men want to go to Tehran.”” The idea of Iraq as a satellite of Iran was not what they wanted. They wanted to take the regime in Iran down.
Well, if you'd quoted George Bush himself, or one of his close advisors, as to what the Bush Administration wanted or thought that would have a chance of being persuasive. As it is, from my point of view your second sentence is a non sequitur relative to the first.
Look - it's fine if you don't want to believe what I believe. I remember hearing that line a lot in American media at the time, though. It was pretty clear that Bush and Co. viewed both Iraq and Iran as unfinished business. If you want to believe that Bush was actually pro-Iran, be my guest.
I think it's silly to postulate a "real reason" why the US invaded Iraq.
The invasion of Iraq required building a consensus among many different organs of government, a substantial slice of the politicians, and a substantial slice of the population. Different people were convinced by different arguments.
Back in college, a year or two after we invaded, I did a geology project where I stumbled across a line in some book about oil exploration in the 1950s. Back then we apparently didn’t even bother with wells under a certain size or quality. But in Iraq they were all mapped out, which is the expensive part of the job, and as of the 1990s they hadn’t been exploited. I tried figuring out once or twice since then if that oil was still there, and if it’s not, who took it. But that’s not the easiest question to research and I never had time for it.
Why is it hard to believe that they were competent? I think it's well-established that many of Dubya's malapropisms were campaign tactics, and the criticism of Cheney/Rumsfeld/Wolfowitz etc. was always that they were evil, not that they were stupid.
The dude did completely renege on his completely correct prediction just 9 years later:
"In an April 15, 1994 interview with C-SPAN, Cheney was asked if the U.S.-led Coalition forces should have moved into Baghdad. Cheney replied that occupying and attempting to take over the country would have been a "bad idea" and would have led to a "quagmire", explaining that:
Because if we'd gone to Baghdad we would have been all alone. There wouldn't have been anybody else with us. There would have been a U.S. occupation of Iraq. None of the Arab forces that were willing to fight with us in Kuwait were willing to invade Iraq. Once you got to Iraq and took it over, took down Saddam Hussein's government, then what are you going to put in its place? That's a very volatile part of the world, and if you take down the central government of Iraq, you could very easily end up seeing pieces of Iraq fly off: part of it, the Syrians would like to have to the west, part of it – eastern Iraq – the Iranians would like to claim, they fought over it for eight years. In the north you've got the Kurds, and if the Kurds spin loose and join with the Kurds in Turkey, then you threaten the territorial integrity of Turkey. It's a quagmire if you go that far and try to take over Iraq. The other thing was casualties. Everyone was impressed with the fact we were able to do our job with as few casualties as we had. But for the 146 Americans killed in action, and for their families – it wasn't a cheap war. And the question for the president, in terms of whether or not we went on to Baghdad, took additional casualties in an effort to get Saddam Hussein, was how many additional dead Americans is Saddam worth? Our judgment was, not very many, and I think we got it right."
I had wondered if the simpler explanation is “in an era of accelerating technology growth, there is no way to make sure your military doesn’t fall behind unless you are constantly fighting a war.”
I have often wondered if this is true, and if so, how many people in the US government think about it or plan around it.
Even beyond the battlefield testing and creation of veterans, there are a whole lot of social unity and governmental programs than depend on having a constant stream of veterans of multiple ages. WWII, Korea, and Vietnam provided huge numbers for VFWs and the VA hospitals. Beyond actual organizations, there are also things like veterans parades and other events. School aged children are asked to find a veteran to thank on Memorial Day and Veterans Day.
We had pretty small numbers from the mid 70s until the first Iraq War. Those numbers didn't go back up much until Iraq II and Afghanistan. Now there are lots of veterans for those organizations again, around the time that WWII vets were dying off in large groups, and Vietnam vets were rapidly aging.
War has been a constant around the world; I believe that all the years of recorded history where there hasn't been a war on SOMEWHERE sum to around a century.
I don't think a conspiracy to feed the VA and VFW is needed to explain US military adventurism; it's an even more far-fetched assertion than the usual hobby-horses of "War only exists in the modern era to trick the populace into giving up rights" and "War only exists in the modern era to kill off single men".
Coleman Hughes interviews Ashley Rindberg (specialist in what's wrong with the New York Times) and gives an enthusiastic mention of Scott's Ivermectin article at 50:00
It's a generally a fascinating interview, and finishes with a recommendation of researching the things you care about rather than using media that's pushed toward you.
"What's wrong with the New York Times" is vague and mild, and one might almost think it has something to do with the not-quite-recent difficulty.
No. Now about siding with Hitler, covering up the Holodomor (both because of convenience for business interests), being wrong about Saddam's weapons of mass production, and the amazingly sloppy 1619 project?
The hypothesis is that the NYT will do whatever it takes for money, access, and status.
I still think their science reporting is generally good, but let me know.
That makes sense, I was wondering why it got so little interaction. I don't know what happened but it seems to not be happening now; let me know if it happens again.
"Motte" is from the Old French word meaning "mound". There is a mound of earth at the back of the bailey, and on that mound is where the final defensive structure is built. Rhyme helps with memory, so I wrote you a ditty:
Ironically, "moat" de-confuses it for me. The motte is well-defended. A moat is a type of defense. Therefore, moat ~ motte.
As for bailey, the word just sounds nicer to me than "motte". So the bailey is the pleasant place to be.
If you start by reading about baileys, you'll probably get confused, since baileys are walled off, just like castles. (Baileys are a type of castle.) Better to read about "motte and bailey castles" first. Part of the point of the analogy is that *both* mottes and baileys are defensible; both have walls - it's just that mottes are *more* defensible.
"An object has free will and is thus an agent if it can take decisions that are [] *free* (they can escape determinism) .... "
I know that this is the classical definition of free will but it is not what people seems to mean when they think that a decision is freely made by someone. So it seems a relatively boring semantic issue : if "free will" and "agency" are defined in a way that is different from the most commonly understood meaning of the words, then yes , people do not have this impossible free will. Il you remove the requirement of non determinism, which is odd in the first place and which clash with the most common use of "free" and "agency", then people have free will.
Yes - this is a good point. What is "free will"? It's a couple of English words people use to gesture at a certain set of intuitions and impressions. It's not as if those intuitions and impressions will go away just because some narrow definition of "free will" can be shown to be logically inconsistent.
My opinion too! I would even say that the classical philosophical notion of free will is not only narrow buit does not correspond to these intuitions and impressions (i.e the non determinism criterion does not appear at all in our intuitions and impressions of what acting freely means).
My point, which is by no mean original, was that the classical philosophical definition of free will requires non determinism,, whereas people common understanding of "acting freely" does not. People common understanding of "acting freely" seems to refer to the "position" of the motivation of someone's actions, internal (free action obv.) or external (no free action). There is of course not a qualitative distinction between free and non free actions, rather it is a question of degree.
If you do not like this definition, it can be noted that what is people understanding of free will is an empirical question. In France a few month ago, a youtuber who popularizes philosophy constructed a survey with many scenarios inluding varying amount of determinism and control, and then asked his viewers to say whether or not in these scenarios people acted freely. And the answer was clear: even when the scenarios are very clearly (emphasis on very!) deterministic, a large majority of people did consider that free action was present when the person in the scenario choosed to act based on internal motivations.
So for me, the whole debate can be summed up as :
- The usual, common meaning of "free will" is to be able to act without external obstacles. By this definition , people do have free will, which is coherent with the fact that people usually feel that they can act freely (sometimes!).
- If free will is defined in a non intuitive and non autocoherent way requiring non determinism, then this free will does not exist. People are upset when you say so because they feel that they do have free will, in the common meaning of the terms.
So philosophers introduced a non-intuitive, incoherent notion, called that free will and then said to everyone that because this strange notion did in fact not exist, then people could never act freely, even when they thought that they acted freely. It seems to me a really worthless philosphical notion, be it sure can cause endless discussions!
Philosophy doesn't have a single notion of free will. You are probably objecting to libertarian free will. Libertarian free will has not been *shown* to be incoherent here, although a number of people have *said* it's incoherent.
Yes, I was commenting on the OP definition of free will, which is I think the most common one and also the Libertarian one (if I remember correctly). I do not think that this notion is incoherent in the sense that it is autocontradictory I think it is incoherent is the sense that is not coherent with the usual, common langage meaning of "free will". May be incoherent was not the correct word, non native english speaker here, as you probably already have guessed.
I think that it is important to distinguish who is using the definition. Many philosophers do use the definition that you indicated, the one requiring non determinism.
But when lay people discuss the notion of acting or not freely (which is of course tied to the notion of moral responsability, retributive justice etc.) I think that the intuitive concept they use is usually the internel/external origin of the motivation for the action. BUT because this is an intuitive concept, there is no formal definition. People do not reason to determine whether this or that action was free, they feel that it was free or not. And yes, internal/external is partly arbitrary. Somes cases are clear cut in one direction or the other, some other cases not so much. This is not math!
I've been thinking recently about a definition of "free will" involving response to incentives. An agent can be said to have free will whether or not to do X if you could affect its decision using (somewhat arbitrarily defined) incentives.
Examples:
1. If you pay me a million dollars, I'll hold my breath for a minute, thus I have free will in choosing whether or not to do that. But I can't stop my heart beating for ten seconds, or stop myself from blinking for an hour -- those things are not subject to my free will.
2. Like many people I have a caffeine addiction, but if you pay me a million dollars to never drink a coffee again then I'll manage it, thus I still have free will regarding caffeine. But there are (I suppose) some heroin addicts out there who are simply unable to resist the pull of that next shot no matter how large an incentive they are offered; in a case like this it's fair to say that the heroin addict no longer has free will with regards to heroin.
3. The coin-counting machine clearly doesn't have free will, you can't bribe it to not count coins or threaten it into declaring a penny to be a dollar. (Of course if you don't give it electricity then it won't count coins, but calling that an "incentive" is abusing my vaguely-defined term.)
4. What about animals? Can a crocodile be incentivised not to attack a delicious water buffalo that strays into range? I don't know, which seems reasonable because I don't know whether a crocodile has free will. A dog can be incentivised not to eat the delicious treat balancing on its nose, so it seems fair to say that dogs have some kind of free will.
I'm thinking about this definition not in terms of animals and machines, though, but in terms of humans. People sometimes suggest that a criminal is not responsible for their actions due to certain circumstances of their life, but the way I see it, if a person could have been persuaded _not_ to do something by an incentive then they're fully responsible for what they did.
There isn't any point in handing out rewards and punishments to entities which do not respond to rewards and punishments, ie which do not have a reward function , or utility function, or desires, preferences, etc. So Basil Fawlty thrashing his car for not starting is silly. That pretty much gives you the compatibilist theory of free will. Compatibilism is able to say what kind of entity has potential free will -- an entity with desires -- and when free will is removed in actuality-- when it is unable to act on it's desires.
But compatibilism uses a narrower criterion than the the theory that free will is just decision making. That theory is unable to cash out the meaning of the "free" at all.
A computer can make decisions, but it can't be bribed.
Unless of course you program it to accept bribes. Honestly, I'm trying to come up with a definition of free will that matches the subjective experience of being human, not necessarily one that is robust to every adversarial edge case.
This seems like a helpful way of thinking about it. These sorts of discussions don’t often rise above “everything is predestined, no one can properly be praised or blamed for anything” vs. “that’s obviously wrong, therefore free will is real.” It’s actually more of a spectrum, isn’t it? The more compulsion or duress or constraints one is under, the less free one is, right?
I think this framework is much more useful than the usual confusion between methaphisics, decisionmaking and ethics. That said, there are obvious problems with it, especially while using it as a strict rule for moral judgement.
Imagine an extremely poor fellow who's stolen some bread to survive. Would he do it if he was offered one million dollars as a counter offer? Of course not. And thus he is guilty and doesn't deserve any indulgence.
On the other hand imagine a corrupt politician who steals millions of dollars of tax money all the time. Will one offer of a million dollars change his mind and make him not corrupt anymore? Of course not. And thus he had no freedom in the matter and isn't condemnable for the corruption.
Sure! But it hightlights the inherit problem of the framework - the relativity of incentives and how to deal with it. Is this person unfree and thus not morally responsible or just wasn't presented huge enough incentive? Are people who need larger incentives less free than people who need smaller ones? I think in a common sense, extremely poor person is less free than extremely rich one, but this framework lead to the opposite conclusion. What about cases when people sacrifice everything for the sake of a greater goal? Are they the most unfree people possible?
What do you mean by "incentives don't work in his case" and by "nor are they a deterrent to other serial killers"?
Obviously if serial killer is put in jail they won't be able to kill anybody else. Also the world where serial killers are left unpunished, will have more killers, all things being equal.
"an agent must be able to take decisions distinctly from the two forms of mechanism-decisions above"
Since you have defined both of them as extremes, there's a compromise available.
"If the decision process can escape determinism and give multiple different answers to the same question with the exact same initial conditions, then it's not meaningful. If the decision process will rigorously and logically give the same answer to the same question in the same initial conditions, then it's not free."
A partly determined, partly.random.event is still.partly determined so not entirely meaningless. It's not wholly free either, but complete freedom is unrealistic.
What is your true objection? are you saying that a mixture of determinism and randomness is *inconceivable* -- or just that it doesn't exist?
" unless the computer is literally relying on quantic phenomena (but as I noted, even those are not above suspicion)."
It's true that we can't completely rule in fundamental randomness emerging from the quantum level, but that doesn't mean the burden of proof is entirely on the indeterminist -- that would be a selective demand for rigour.
Causality and indeterminism aren't hopelessly incompatible. The less of the one you have, the more of the other you automatically have. Pure randomness would provide not basis for science, since anything would be equally possible, and even statistical laws would be impossible. Bur statistical laws, at least, can be found in all areas of science including quantum mechanics. QM allows exactly even probabilities between a very constrained set of possible observations such as "spin up" and "spin down".
It also has "spectral" operators, where there any value can be observed with non-zero probability, but there is nonetheless an "expected" value that is more likely than any other.
It gives a nice intuitive sense to what it means to say an agent "could have done otherwise", though formalizing it is difficult, bordering on impossible. You might choose this as your definition of free will, it seems more useful than a logically incoherent one.
> I think a further problem is that when people express a moral judgement on the basis of a "could have", it's not an epistemic "could have"; they're not saying "it was within the realm of probabilities that you could have made a different choice, but my prediction was wrong, oh well". It's a metaphysical "could have"
It's still an epistemic could have. We are just talking not about epistemic uncertanity of an observer over a person making a decision, but about the epistemic uncertanity of the person who is making the decision. When you are making a decision you don't know yet what you will chooose. You can frame it as deciding whether you are a wicked person or not. But i don't think that such attribution errors are helpful.
When you know what you will do, you can't choose - you'll just do what you will. You just follow the script of doing the thing.
When you don't know what you will do, you have to choose. And this choice determine the future. After the choice is made you may call the outcome inevitable if you feel like it, but in the process of choosing it's just one of the options due to your epistemic uncertanity. And that what matters for morality.
And of course you preferences affect your choices. Your preferences are part of you, and its you who make the choice.
"Probabilities aren't properties of systems, they're a measurement of our uncertainty about the system; but the system itself is still 100% deterministic "
That is not a fact. Whether there is fundamental indeterminism is a currently unsolved question. (Of course, there is subjective, knightian, uncertainty as well).
I think it might be better stated as that we are not entirely sure what the definition of "determinism" is. It can't be "if you know everything, you can predict anything" because that's a truism: if you know everything you know everything, and there is no "prediction" in the usual sense of the word.
So it has to be something like "if you know x% of everything, with x < 100, then you can predict y% of everything, with y > x." But is y = 100 even conceptually? If it isn't, does that mean what is not predictable isn't a fact, or is it just inaccessible, and if it is, is that inaccessibility fundamental or practical? Et cetera.
While a lot of these questions are actually pretty answerable practically, meaning in any way they affect human lives, they still seem to be largely undecided (if not undecidable for lack of good definitions) philosophically.
"if you know everything, you can predict anything"
It can mean "if you know everything about the past, and you know exactly how past states evolve into future states, you can predict everything about the future". And does mean that.
“If the decision process can escape determinism and give multiple different answers to the same question with the exact same initial conditions, then it's not meaningful.”
I’m sure you didn’t notice while setting up the hypo, but this premise is begging the question (you accidentally smuggled in the idea that anything which is not fully determined is random), so I don’t think you can conclude anything from the argument.
Jeez, Godoth, our views of this issue seem pretty compatible. Yet we sort of fucking hated each other by the end of our argument about dealing with covid misinformation published on Substack. I am not being snarky here, more saying life is strange. It is a relief to have a feeling of agreement and respect for you -- hope you're having some version of the same experience as regards me.
You assume that the “weather is deterministic.” But as far as we can empirically tell, weather is probabilistic. We cannot demonstrate that weather is deterministic.
Likewise, QM is, empirically, probabilistic. It is certainly not random and certainly not fully determined.
It’s important that you realize that what you’re doing is not making empirical observations, what you’re doing is taking very small measurements and then assuming that what you believe you observe maps to incredibly complex systems that you can neither predict nor understand. You do this because it has worked well enough with much simpler systems, but that doesn’t make it true.
You think that will must be deterministic because you assume that most systems must be deterministic or theoretically random. But will might be probabilistic. Or it might be something else that you don’t grasp yet. Certainly the phenomenal experience of will is that it is neither random nor fully determined.
The better position here is to realize that you have the burden of proof to demonstrate that you can fully understand and predict a probabilistic system like weather or QM. Only then will intellectual rigor allow you to begin dismissing alternative explanations as improbable. Until then, the assumption of determinism is just a neat personal philosophy.
We know more than enough to know that in principle weather is predictable up to the limits that quantum mechanics itself imposes. In physics quantity does not have a quality all of its own, and we can readily infer from what is true about 6 degrees of freedom to what must be equally true about 6 x 10^23.
"My assumption is safe because I further assume that all other systems are qualitatively identical to the limited experimental systems I can fully predict" is not an argument I find convincing in the least.
1. Only shows subjective unpredictability *can* arise from objective determinism, not that it *must*.
2. Tacitly assumes that all unpredictability must be assigned to the objective indeterminism of the system itself, if objective indeterminism is admitted to exist at all. But believers in objective indeterminism do not have to be disbelievers in subjective unpredictability. Indeed, subjective unpredictability based on sheer lack of information is inevitable anyway.
>but presumably, the kind of definition that interests most people is one that allows to distinguish between agents and mechanisms.
I don't think this is a helpful or interesting framework. By distincting agents and mechanisms in two separate clusters you implicitly smuggle the notion that agent isn't a mechanism. This notion fits our initial spiritualistic intuition, but is heavily contradicted by evidence accumulated by materialistic science. I could rant for a very long time how this is the main source of confusion about free will, but let's choose a better framework instead.
The interesting distinction is between agents and non-agents. Those who have decision making ability and those who don't. A chess program, which evaluates the situation on the desk and then makes the best move according to its utility function is a decision maker. A chess program that just output a specific sequence of moves, unrelated to the situation on the desk and has no utility function is not a decision maker.
"The interesting distinction is between agents and non-agents. Those who have decision making ability and those who don't. A chess program, which evaluates the situation on the desk and then makes the best move according to its utility function is a decision maker."
Do you think it makes sense to blame or punish a chess programme? If not, "decision maker" is the wrong way of cashing out "morally culpable agent".
We literally reward and punush the model while we train it. Not only it makes sense, its essential for the program to work. Also check my reply to Machine Interface below.
The main difference is in the complexity of human values compared to the chess-program values, as well as number of "possible moves" due to human's ability to act outside the chessboard. But it's just a quantative differance. Bellow you asked specifically about morality so lets construct an example of an ethically responsible chess program.
Suppose we take a decsion-making-chess-program optimized to win games and then try to make it care about other stuff, considered to be in the realm of morality. Let our program have an additional video input chanel allowing it to see the face of the opponent. And let the program reward function represent the emotional satisfaction of the opponent in some way, based on the state of their face. For the simplicity of our example, lets ignore all the issues with Goodheart's law and just assume that it's a valid approximation.
What happens to our new program during its training? Sometimes it'll make a very effective moves, which will lead to opponents being very sad and showing it on their faces. Such moves will be punished. As if the program have moral responsibility. And indeed, eventually the program will learn to value the visible emotional state of the opponents and become "more ethical".
The purpose of this mind experiment wasn't to encompass all possible philosophical stances on morality. It was to show how we can make a machine more ethical through the same process we use to make it effective, just by adding a new dimention. It's a demonstration of the idea that the perseived difference is just quantitative.
I do not equate empathy with morality but the idea of universal morality among all possible minds seems obviously wrong to me. Ethics is just the sphere of shouldness and utility functions can be quite arbitrary.
Well obviously there are many large differences between the two. But you need to ask a question that gets at the difference some might feel is the crucial one. You need to ask whether the 2 situations are different in a certain respect -- *but* ask the question in a way that doesn't smuggle in somebody's preferred answer in the question's underpants.
Well, I think the phrase "we can say" in your question is the element that smuggles in your preferred answer. Because you don't really literally mean *we are able to say such-and-such*, right? Because if what you and I are able to say decides the right answer here, then I'll just say "nope, buddy, chess programs and men are identical as regards freedom of choice" -- and we're done (until you come back at me and say I'm wrong). Seems like what's hiding in your phrase "we can say" is something like the idea that moral judgments of men feel meaningful to us and moral judgments of chess programs feel like nonsense, and that the difference in our feeling about the 2 judgments is indisputable evidence that the men and chess programs differ in some way as regards free will & related matters.
Edit: to add a later thought about subjectively experiencing things as morally judgeable. Of course I feel the same as you do: I experience people, but not chess programs and other inanimate objects and processes, as morally judgeable. Still, there have been some occasions when I have experienced exceptions to that generalization. There have been a few times when I have experienced somebody as being beyond judgment. Sometimes this has happened regarding people I am close to, where I am so moved by what I know they feel that I lose any capacity for judgment about what they do. Sometimes this has happened with writers and thinkers I admire -- I find out that the person was an antisemite or a wife-beater and I think, "I don't care, it's irrelevant." And there have been times when I have experienced inanimate objects as evil -- somebody's cancer, for instance. Yes, you can say calling the tumor evil is just a way of expressing distress -- but what I was experiencing felt like an actual judgment about the tumor's evil nature, not a metaphor.
Anyhow, the point of all this is that even if we used your criterion of "feels judgeable" as a litmus test for whether or not an entity has free will, the test does not work perfectly. And my guess is that as AI becomes more advanced, there will be more situations where a man-made, nonhuman entity feels judgeable to us. In fact, come to think of it, I already experience Facebook as stinky, rotten and evil (and I'm not talking here about the internet shitlords who make decisions about about the damn thing, I'm talking about Facebook itself).
I think it's only definitionally impossible if you set up the question and define the terms the way OP did. Calling will "free" gets a metaphor going in your mind in which action is either "determinist" or "free," i.e. either constrained by circumstances that function as jailors; or not constrained, i.e. no jailers present. But this way of framing the question sort of sets things up so that everything that is not random (independent of everything else, and unpredictable) is "jailed" and unfree. So since the coin sorting machine does not give random results, it's "jailed." But what is this jail the coin sorting tray is in? Turns out the "jail" is just the construction of the coin-sorting box itself -- the various slots and holes and ramps that guide each coins into their proper place. But that's a weird way of. framing things . The slots and holes and ramps aren't the jailers that keep the tray from freely choosing where to place each coin.They are parts of the tray -- the structures that implement the trays's choosing process. The sorting tray "chooses freely" where to put each coin. It makes and carries out that choice via a structure of slots, holes and ramps. So now if you up the size and complexity of everything, and start talking about people and their synapses, and do they freely choose how to move their hand or are they "constrained" and "determined" by the laws of physics and chemistry that govern the behavior of brain cells, nerves, muscles, etc., it's still cheating in the same way. The brain cells and nerves and muscles and synapses aren't determinist constraints on the person's choice of how to move their hand -- they are the physical and chemical structures and processes via which the choice is made and implemented.
Well, I'm rejecting some of the terms in the way the problem is stated, including the term "free will." What does that term even mean? Using that term implies all kinds of stuffk, including the idea that there's a contrasting phenomenon, or maybe more than one: Will that's constrained by being in a box -- will that used to move freely among the possibilities choosing some but not others, but now is quadraplegic -- will that gets kind of bossed around, but sometimes gets to do what it wants. None of that makes any sense. (And for that matter, what does "will" really mean?) What I'm saying is that people and the sorting tray are both making choices, but in the case of people the mechanism via which the choices are made is so complex we can't see the choosing process, and feel very tempted to think a process of a whole different kind is going on.
How do you separate being rewarded as an incentive vs being rewarded on principle? As I understand it the whole "on principle" thing is just an intuition for an incentive.
By your incredible logic, each beating of our heart is an act of free choice because the physical systems that beat our heart are simply structures and processes by which the choice is implemented.
It's weird that you accuse OP of smuggling things in, when you're done just that, without explaining what it could possible mean for a person to make a choice independent of the mechanistic workings of their brain.
Well, I did use the phrase *choosing freely* in my response, above, but I did that in an effort to make my point of view more comprehensible to OP, to sort of say what I thought in a way that used some of OP's concepts. I have gone back to my post above and put the phrase in quotes, so that sentence now reads "The sorting tray "chooses freely" where to put each coin." In my view, the phrase "choosing freely" is an inhabitant of OP's way of framing the question, and, like OP's framing as a whole, is sort of incoherent and self-evacuating. What does "choosing freely" mean, exactly? Phrase implies that there's an opposite to choosing freely. What the hell would a meaningful opposite to choosing freely? Sure, you can have a guy with a gun to your head saying he'll kill you if you don't take your hat off, but then if you take your hat off then you're choosing to take it off to avoid the bullet, so you're still "choosing freely". Can you think of a meaningful opposite to free choice, and explain how lack of freedom would come about?
Seems to me that the phrase "choosing freely" is sort of like "wet water." WTF is wet water, as distinct from plain water? What's the opposite -- dry water?
'What the hell would a meaningful opposite to choosing freely? Sure, you can have a guy with a gun to your head saying he'll kill you if you don't take your hat off, but then if you take your hat off then you're choosing to take it off to avoid the bullet, so you're still "choosing freely".'
You're not choosing freely because you are not acting on the desires you would have acted on otherwise.
So there are at least two opposites to "choosing freely": lacking the basic capacity -- lacking desires , agency; and being under compulsion in a particular situation. (also, you can have compatibilist free will but lack libertarian free will).
You haven't posted on this thread for a while, and may have grown tired of it. I haven't though, so thought I'd throw out one more thing, about the idea of there being situations where there are constraints on someone's choices. Please do chime in if you're still interested. I love this stuff.
Seems to me that all sort of things can be conceptualized as interfering with someone's doing something, and the various things are wildly different in nature, in category, and in the sense in which they prevent someone from doing something. Here's what I mean. Think about the difference in the various "constraints" at play depending on how this sentence ends.
"She was not able to read the page because . . .
. . . he said he’d shoot her if she did.”
. . . she did not have her glasses.”
. . . the words on it were written in a foreign language
. . . she was sure the message there would upset her terribly.”
. . . every time she tried to read it she somehow instead just moved her eyes over the lines of print and understood what they meant.”
I understand your first example of not choosing freely (under compulsion, gun to head) but can you expand on the second one -- "lacking desires, agency" -- and give an example or 2?
But what if the coin-sorting tray is conscious, and is just so damn dumb and limited in its grasp of things that it experiences itself as freely choosing where each one of those coins go? And, if you expand the coin tray & its consciousness and the variety of its choices and complexity of its repertoire enough, you get us. You get me, for instance. I experience myself as freely choosing to write this, but of course with advanced enough tech you could trace all the synaptic firings, etc. that produced these words.
"but of course with advanced enough tech you could trace all the synaptic firings, etc. that produced these words."
So? That.doesnt mean you're not choosing .or freely choosing .. it just means that choosing had moving parts,.and doesn't appear out of nowhere.
According to science, the human brain/body is a complex mechanism made up of organs and tissues which are themselves made of cells which are themselves made of proteins, and so on.
Science does not tell you that you are a ghost in a deterministic machine, trapped inside it and unable to control its operation.: it tells you that you are, for better or worse, the machine itself.
So the scientific question of free will becomes the question of how the machine behaves, whether it has the combination of unpredictability, self direction, self modification and so on, that might characterise free will... depending on how you define free will.
I read it more as saying that just because the program isn't complex enough to understand how itself runs, does not make it any more/less deterministic/free. Just because I can't calculate my own brains decision process does not mean that process isn't deterministic.
Who was it that said “If our brains were simple enough for us to understand them, we’d be so simple that we still couldn’t”? But just because a brain is made of atoms doesn’t mean there’s no such things as brains. Your decisions and actions are surely made of the untraceable concatenation of a zillion deterministic + random micro-events. Yet people still decide, and act - whether freely or under compulsion/duress.
Liked this quote so much that I hunted around to find who said it. Here's what I found:"The earliest evidence known to QI appeared in the 1977 book “The Biological Origin of Human Values” by George Edgin Pugh who was a nuclear physicist and the president of a company called Decision-Science Applications. The statement was used as a chapter epigraph with a footnote that specified an ascription to Emerson M. Pugh who was the father of the author. Both the father and son were physicists, and Emerson was a professor at The Carnegie Institute of Technology:[1]"
Well, the problem of Consciousness gets Hard as a result of someone's framing it in a certain way: "Consciousness is a thing. I know it is, I'm experiencing it right this moment. So are you. You know it's a thing. And yet, and yet -- it's a completely different kind of thing than all others things. It's not matter, it's not energy, you can't see it, you can't smell it, you can't measure it. Yet it controls all kinds of stuff. It controls the hand that is writing these words. " Etc.
The origin framer of the hard problem was David Chalmers, but he did not define consciousness as nonphysical. He did define it as having subjectively accessible properties. And why not...it does. If I am not to refer to my subjective access to my mental states as "consciousnes" , then I need some other term, becausnits still there. There's.a difference between simplifying a problem, and switching the topic to.a different, simpler problem.
Not arguing here - just wondering aloud. Do all languages have a word for consciousness, or something sort of like what we mean by consciousness? As I think this over just now, it does not seem to me that *consciousness* is such a widely useful concept that every language would have to come up with a word to capture it. Seems like any language would have to have a words for non-conscious states -- "asleep," "stuporous," "dead" -- and a word for "self" or "I." But "consciousness"? Maybe not that useful, outside discussions of this kind. Maybe a word whose meaning is pretty hard to convey to somebody who did not grow up with that concept being one of the bricks in the house of common sense.
For what it's worth, I think Spinoza would agree with you. And we can go all the way back to Aristotle in terms of having a cause for every effect (at least until we get to his Unmoved Mover). But assuming absolutely everything is 100% deterministic may be putting a bit more weight on that assumption than it will bear. AFAIK, nobody has refuted Hume's skeptical observations regarding causality and the problem of induction - if past performance is no guarantee of future results, isn't causality itself just a useful guess? And if we have no logically coherent account of causality, then we certainly haven't got one for determinism.
I admit I don't spend much time thinking about this one so these might just be D-student thoughts, but 2 questions immediately come to mind:
- Why would such a simulation be useful in the first place? Like I said, I haven't read much on this so maybe it's obvious and I'm just missing something, but it #3's likelihood hinges on their being some use for "simulations of evolutionary history." I don't know what we would use that for now if we had the capacity to run it; is there a value proposition I'm missing?
- Assuming such simulations did have value, wouldn't they a simulation capable of figuring out that it was a simulation be functionally useless to a posthuman civilization as a data point? In that case the mere fact that we are capable of having this conversation points toward us not being a simulation - or at least that nobody's noticed "aw, shit another one figured it out" yet and deleted our trash dataset to replace us with a clean one.
Here's a possible counterargument I've never heard:
Suppose that you live in a society where energy and computing power are so abundant, and simulation algorithms so advanced, that it's possible to simulate entire societies of humans just for shits and giggles. What do you choose to simulate?
Maybe some weird history geeks would simulate societies in the distant past. But the real benefit is in simulating the present. Simulate your customers, so you'll know what they're willing to spend money on. Simulate your competitors, so you'll know what they're up to. Simulate your enemies, so you can defeat them. And most of all, simulate a billion different versions of yourself so that you know how you'd respond to various scenarios.
(And if you feel bad for all the versions of yourself that you're abusing, simulate a few billion versions that are perfectly happy to make up for it.)
Given that "present" simulations are likely to be much more useful and hence more abundant than "past" simulations, if you live in a simulation you're much more likely to find yourself in the _present_ of a society that can do these sorts of simulations than the distant past. Since we find ourselves in "the past", it is more likely that we are in the real world and that these sorts of simulations will never exist.
I would also expect lots of other simulations to be some weird sex things, or very gamefied experiences. The fact that our world isn't like this is an evidence in favour of not being in a simulation.
I have troubles accepting "thus these sorts of simulations will never exist" conclusion. It requires assuming that we are randomly selected to exist amoug all possible people among all times which seems not to be the case: we have causal link between past, present and future. Ability to deduce that there is no future because it's not now seems like cheating and definetely not how cognition engines produce valid knowledge.
If such anthropics reasoning actually worked it would mean that we can dramatically increase our chances of survival as a species by precommiting never to simulate consciouss beings and strictly controlling our population.
What is the difference between simulating consciousness and creating consciousness? And what is the difference between a posthuman simulator and a God? I think your ethics thought experiment makes this even more explicit, with a benevolent creator that creates a universe of conscious individuals so they can experience the joys of existence. In my faith specifically, the Church of Jesus Christ of Latter-day Saints (aka Mormons), your thought experiment comes surprisingly close to our doctrinal worldview on more than one level.
The obvious difference is that we would have no moral obligations to the posthuman simulators beyond (maybe) what adult children owe their parents. They certainly aren't the fount of morality, and if they tell us to slaughter the Amalekites or whatever we should say no.
Not too much is defined in specific details, unfortunately, although this is a fun area for speculation within the faith and by those who would mock our beliefs. But to put it short, we take Paul more literally than most Christians when he taught that we are “the offspring of God” and “we are the children of God: and if children, then heirs; heirs of God, and joint-heirs with Christ.” 19th century Church leader Lorenzo Snow gave us the couplet “As man now is, God once was: As God now is, man may be,” which is about as much detail as we have but is full of intriguing possibilities. The Church has a nice article on the topic of “becoming like God” here: https://www.churchofjesuschrist.org/study/manual/gospel-topics-essays/becoming-like-god?lang=eng
Obviously one difference between LDS beliefs and your benevolent simulation is the idea of an immortal soul and Heavenly afterlife. Just a small detail, but who can say what posthuman technology will and won’t be capable of. 🙂
If there ancestral simulations which simulate me, than myself in simulation execute the same algorithms as myself in reality. When I make decision in one of the simulation, I also make the same decision in every other and in real world. At this moment the distinction between me in different simulations doesn't make any sense. We are the same entity.
How good does the simulation need to be? I can imagine small changes (doing a thing five minutes later or earlier, for example) which would almost never make a difference.
I believe that other sorts of simulations are going to vastly outnumber ancestor simulations-- artistic simulations, scientific simulations, playful simulations.... I'm not sure how this affects the odds of *our* being in an ancestor simulation, but I think it's less likely.
Agreed. The vast majority of simulations will be for entertainment or some sort of productive purpose (ie: simulating multiple agents to all work on a research/technical problem in parallel).
I imagine that the overlap between "enough of a geek to want to see what the 2020s were like according to our best models" and "has the resources to simulate a solar system filled with ten billion sapients" to be limited to whatever the god/alien/post-human equivalent of universities are.
I consider consciousness to be the act of feeling. Is a computer system, like a modern automobile conscious if that system can feel (diagnose) problems? Perhaps. We are stressed if we feel pain, or some degradation of our physical bodily systems. Does a car feel pain if some system is distressed? Is a car distressed if there is a warning lamp illuminated? Is our distress over-rated.
My distress over the concept that we may be living in a simulation is this. 1) Someone is running the simulation, that would be God, or Gods. 2) A simulation has a purpose, that purpose is to evaluate designs. If we are in a simulation, we are the designs being tested for robustness. If this is the case, there is a higher motive for testing designs for robustness. This leads to the possibility that there is a God, and God is testing the designs—us—to find if we are worthy to proceed to the next step. Which is the basis of Christianity, if we don't meet the goal of salvation, we don't achieve the entrance to the next step (heaven). Or perhaps there is a Buddhist bent, where we all need to fulfill each circuit to achieve Nirvana. Perhaps we need to obey each of the 613 commandments of Judaism, or perhaps Judaic Law is just one circuit in the big wheel.
> My distress over the concept that we may be living in a simulation is this. 1) Someone is running the simulation, that would be God, or Gods. 2) A simulation has a purpose, that purpose is to evaluate designs. If we are in a simulation, we are the designs being tested for robustness.
Our universe might just be a grade school science project. The posterboard behind the universe says "Proof that intelligent life can evolve in a universe with only 3 spacial dimensions and 1 time dimension". This year he got an "A" on his project because of the added constraint "With mostly complete conservation of energy + entropy".
Everyone was forced to make a 6+3 universe in kindergarten.
One of the smarter people I know holds that the mis-match between classical and quantum physics is proof of the simulation, precisely because it appears to be the work of different authors with wildly different skill levels.
Basically: we're a group-work project, and the being responsible for quantum physics slapped their part together at the last minute.
Computer? Planet? Power? That's one of the silly parts of the simulation hypothesis - "this place is fake, therefore I know all about the true real world". You wouldn't even know if time, energy, and place are real.
Anyway, the science project wasn't a simulation, he made a real big bang in a test tube, then let it age for 14 billion years. It was a rush job, since the science fair was the next day.
Unless I'm reading it wrong, Bostrom seems to assume by fiat that the simulation hypothesis could only involve posthumans simulating virtual versions of their past. …Why? Sure, posthumans, or other beings recognizable as "humans", running a sort of high-resolution Sims game is a *plausible* answer to "who would simulate the universe", but it doesn't seem like the only one. I, for one, often default to the supposition that we're probably being simulated by extremely strange "aliens", and that it's quite possible that our laws of physics don't look very much like the rules governing the "real" world (or, if you prefer, the world one level up from ours in the infinite recursion of simulations).
I don't think this is an assumption of Bostrom above. If there are other plausible sources of simulation, that only makes (3) more probable. For you to be a likely simulation, it's only necessary that there be *one* likely source of other simulators - adding more doesn't weaken the case.
He didn't call it, just accurately described the likely consequences of a hypothetical one. The idea of it is described as espoused by bloodthirsty and clueless political observers, which is darkly amusing in retrospect.
Yes. Looks like he did not expect an invasion because he knew it would be such a bad idea: <<In general, there will be no Ukrainian blitzkrieg. The statements of some experts such as “The Russian army will defeat most of the units of the Armed Forces of Ukraine in 30-40 minutes”, “Russia is able to defeat Ukraine in 10 minutes in the event of a full-scale war”, “Russia will defeat Ukraine in eight minutes” have no serious grounds.
And finally, the most important thing. An armed conflict with Ukraine is currently fundamentally not in Russia's national interests. Therefore, it is best for some overexcited Russian experts to forget about their hatred fantasies. And in order to prevent further reputational losses, never remember again.>>
If he thought Putin would actually launch a blitzkreig (which is what happened—edit: not actually a blitzkreig but something resembling one), I think he would have have struck a different tone toward the hawks.
Why isn't there a political movement against the tactics of the IRS?
Libertarianism?
I mean specifically their dirty tactics not their existence or income taxes in general.
I don't understand pedestrian crossing lights. Why do you have to push a button to make them work? As far as I can tell, it doesn't extend your crossing time, which makes me think it's about conserving energy: why make the pedestrian crossing lights work when there are no pedestrians? Except that, when there are no pedestrians, the light stays "Don't Walk". Does that somehow conserve energy?
Pushing a pedestrian crossing button to get the sign to work seems stupid, but I'm going to take Chesterton's advice and assume there must be or have been a good reason for them. But what is it?
Many intersections only rarely have pedestrians, and halting traffic for 30-40 seconds every few minutes to let imaginary pedestrians cross is inefficient. Same reason some intersections with little vehicular cross-traffic will default to the main route staying green until a sensor detects a car on the crossing road, except it's harder to make a reliable automated sensor for a thing that isn't a metric ton of steel.
Well in the Texas town where I live, the pedestrian WALK sign doesn't halt vehicular traffic. It just turns the WALK sign on when the parallel traffic has a green light. So it doesn't protect you from cars turning right on a red light or oncoming traffic turning left. As a result, pedestrians get hit all the time by traffic while crossing with the WALK sign on. I, myself, jaywalk whenever possible, because it is much safer than trusting that a car won't turn into you at a corner.
It's plausible the WALK sign makes the green light in your direction an imperceptible-to-humans fraction of a second longer, but, being human, I can't detect if that's the case.
Most - but definitely not all - drivers in Saint Paul will yield to a pedestrian at a crossing. It gets weird at times. I have to cross a busy street without a light on my way to and from the corner grocery store. If I so much as glance to the other side of the street when I’m at a crosswalk, cars will slow to a stop for me.
I’m fine with waiting for a break in traffic but some hyper polite drivers won’t even respond to a waved arm ‘Go ahead I’m not in any hurry’ gesture.
https://www.minnpost.com/cityscape/2016/03/st-paul-launches-effort-change-citys-driving-culture-enforcing-crosswalk-laws/
Wow, Minnesota niceness. In Boston we either run over pedestrians crossing without a light, or at least almost do to teach them not to fuck with us.
In my little town, I think it activates the audible - and obnoxiously aggressive - “WAIT WAIT, WAIT” for the vision impaired.
Pete Davidson canceled his trip into space? I don’t think he realizes how a thing that could make him more interesting to women. Why his romantic life would probably just take right off. No more lonely nights with something like that on his CV!
Zvi in his most recent Covidpost mentioned that he couldn't find a video because it was pulled from YouTube. I found the video by putting the YouTube URL into the Wayback Machine - I didn't actually think the Wayback Machine worked on videos, but apparently it does.
http://web.archive.org/web/20220106042900/https://www.youtube.com/watch?v=B8knn6U5Igs
Posted in case anyone here read said Covidpost and is interested in the video, or in case someone here can get it to Zvi/is Zvi (I don't know any way to tell Zvi things other than making a commenter account on one of his blogs), or in case anyone here doesn't know that the Wayback Machine works on YouTube (which is a pretty big deal considering how much stuff YouTube burns).
As a guy who is getting old and no longer in the market, thought I would offer some dating advice since I see guys here sometimes asking for it.
One thing I learned over the course of decades is that women reject you for two main reasons: because you are too in a hurry to get laid or because you are too in a hurry to have a serious relationship. It's easy to mistake a type 1 rejection for a type 2 rejection and vice versa. Maybe that seems obvious, but it didn't seem obvious to me when i was younger so I doubt it seems obvious to every young guy reading this.
An important take-away, I think, is to realize that when a women rejects you it is often for the opposite reason that you imagine. You could learn this from reading Proust, but I'm going to try to keep this shorter than Proust did. (Proust may have been gay, but he understood romantic relationships and sex better than most, straight gay or otherwise.)
If you think a woman rejects you because you aren't attractive enough, that probably isn't the reason. It's more likely because you seem either too interested in getting laid or too interesting in having a long term relationship. Meaning, if you get rejected often, you should change your strategy. If you are trying to get laid, don't. Work on signaling that you are interested in a relationship. OTOH, if you are mainly interested in a relationship, don't. Just try to get laid.
Either way, women are going to figure out what you are really interested in pretty quickly, so don't worry about sending the wrong signals. Do everything you can to counter-signal, because that will send a more balanced signal in the short run. Sending a balanced signal is what most women find attractive.
EDIT: And don't make the mistake of thinking "but THIS WOMAN I am interested in isn't MOST WOMEN". You can't read minds.
> If you think a woman rejects you because you aren't attractive enough, that probably isn't the reason.
Uh, yes it is.
I really don't think it is. While men rate women 1-10, women are mjch more pass-fail regarding men. And there's a low bar to passing: don't be smaller than the woman, smell OK. Even those criteria will be waived if you're funny as fuck.
It also helps to be/appear deeply interested it something other than sex or relationship. Something she can relate to, e.g. some hobby you have in common.
Today's post-secondary institutions are expensive and backward. How can we do better?
Idea for accreditation system based on accrediting students rather than institutions:
1. An accreditation company (nonprofit foundation? PBC?) produces standardized tests to measure student knowledge & abilities. Tests are broken down into a set of mini-tests, and each mini-test tests a small amount of knowledge and ability. The company charges money to an institution whose students take a test, or to a person who takes a test independently, and revenues are used to produce more tests (and to prepare defenses against cheating). Students earn diplomas from the company according to some set of rules to be determined. Much like TripleByte, the accreditation company earns reputation by verifying ability correctly, and it can increase prices as its reputation increases (but if it's a nonprofit or PBC, prices should hopefully not rise without limit.) IMO students should take tests on the same topic twice, at least 8 months apart (to verify knowledge retention and discourage cram-based learning), but that's not my call to make.
2. Educational institutions (e.g. MOOCs) teach students, who pay tuition as usual. The institution chooses what (and how) to teach in each of its courses, and at regular intervals, offers a test from the accreditation company. A test will typically be composed of two to twenty mini-tests chosen according to the material that was taught in the course. The institution earns revenue equal to the difference between tuition fees and test costs. The accreditation company tracks which mini-tests each student has passed; if a student moves between institutions, the gaps and overlaps between courses at the two institutions are tracked exactly. And of course, some people (Sal Kahn?) will offer completely free courses.
This type of system bypasses traditional accreditation boards run by encumbent institutions, which have an incentive to avoid accrediting new entrants. Thus it should produce a competitive online market with low prices, while still giving students a meaningful diploma that they can tout to employers.
Now, surely I'm not the first to think of this, so why hasn't this kind of system become popular?
"Now, surely I'm not the first to think of this, so why hasn't this kind of system become popular?"
They do exist, so I think the question is "why aren't they more popular?" and the short answer is that because if the accreditation is to mean anything and not be a diploma mill, you need some way of checking that the course material is good, the students are qualified, and the diplomas or certificates mean something - and that means you end up re-inventing colleges in some form. E.g. how can you be sure QuadrupleBit graduates are as qualified (this is distinct from capable or clever) as IvyWreathed U graduates? One way is to compare coursework and results. But if both institutions have different forms of assessment and coursework? Well, get QB grads to sit a final exam.
That means you need questions for the exam, a curriculum to cover exam topics, and a place to sit the exam where you can be sure that the QB lot are not all cheating and just have Google open on their computer at home to give them the answers.
Congratulations, you have re-invented the exam hall. And if you need a place to host it, the simplest answer is to get QuadrupleBit, the online certifying company, to hire or rent or find someplace to hold that exam. And since they need this place on a permanent basis and for as many final exams as they're running throughout the year, they may as well have a building for their own use.
And eventually 'online only' becomes 'well we have all these offices and now classroom spaces as well' and it's a new college.
How do you develop a reputation as an accreditation company in the first place?
Universities have history, your QuadrupleByte has nothing. At best you're rated as equivalent to that shitty bootcamp someone's running in the city, whose alumni cannot program their way out of a wet paper bag.
You want is an institution that is a Schelling point for intellectual elites, and does some basic filtering to keep that Schelling point stable. If it is known that clever kids finish CS at $EXPENSIVE_COLLEGE, companies will hire from $EXPENSIVE_COLLEGE. The actual quality of courses at $EXPENSIVE_COLLEGE is an afterthought.
Speaking from my own experience as someone who got a Bachelor's in Computer Science, and then went into Software Engineering - conceivably the very sort of person you're trying to reach - I think that the big missing element is projects.
I credit a great deal of my success in industry to the project-oriented curriculum at the institution I attended, and particularly the class which brought together groups of fifteen people for seven weeks for the closest I got to an actual Software Engineering experience in school. They introduced me to ideas and practices around teamwork and source control which I suppose I could recite, but which would be very difficult to articulate in ways that could convince an observer that I had gotten them in a brief period of time. And if perhaps I could, the ability to verify in a test environment that I had these skills would, I think, hinge on my communication skills to an undue degree - while communication skills are important in my field, I do think this would put more weight on them than is warranted.
It's quite possible that schools do a poor job of actually verifying that a given student has learned the things that a given project is meant to teach. Perhaps they are simply going off some strong prior - whether it's the latest in education research, the dead reckoning of their most experienced faculty, or the dean's latest interpretation of their star signs (though hopefully accreditation puts some kind of lower bound on how bad that methodology can be) - that tells them that given a project which roughly fulfills these specifications, the students should have learned these skills, and if that's true of only 90% of the students and even those learn on average 90% of the skills, there's the opportunity to instill it with further overlap and maybe we'll catch it before they graduate, if they missed out on a foundational skill in their first year that a fourth-year project depends on in the same way that it depends on the students being functionally literate in the university's language.
But I don't think there's a way to identify the kind of learning we want to happen from projects in an environment where the assessments and the teaching are so decoupled. Tests can be fine for getting a student to apply specific knowledge in a narrow sandbox, but they can't really capture how well a student works at something - over days, over weeks, consulting resources (because consulting resources is a skill, which may well be more important than what most given resources have to teach; as a Software Engineer much of my job relies not on me knowing what needs to be done before but on being able to find, interpret, and apply the relevant sources).
I think this gets much worse for fields where there is very definitely not a singular concise correct answer at the end of an equation or as the result of a suite of automated tests. Which is a shame, because one big thing we want from knowledge workers is the ability to work somewhat autonomously, on broad problems with ill-defined endpoints, often collaboratively, and in novel situations. The stuff that doesn't fit this description - well, that'll get snapped up by automation sooner than later.
You'd be turning universities into teach-to-the-test cram schools.
Most of what you'd expect to learn at the university level can't be adequately tested in an examination, and for the stuff that _is_, it would be reasonably easy to cram (e.g. there's only, like, twelve possible classes of exam question for Special Relativity so let's just study them all).
Since universities already have accreditation, I would expect them to reject a new system completely.
But when I was going to my university Engineering program, I crammed a lot because my teachers were so bad that I felt that *actually understanding* the course material was out of reach for me. This is completely different than my high-school experience, in which I *never* crammed.
Uncharitable of you to ignore the anti-cramming measure that I proposed. Cramming is a short-term trick; if you have two tests spaced >8 months apart, you are certainly better off learning the material properly.
No, you just cram twice.
It is impossible to "learn the material" for the long term if it has no day-to-day practical use. The material would need to be integrated into practical projects executed during that 8 month interval.
Not sure why you think cramming twice is "better" than learning the material properly. I disagree about the impossibility of learning things without day-to-day practical use; I learned such things throughout K-12 without cramming.
Current education/accreditation system is a mess of all kinds of signals. It is difficult to improve, because sometimes it is supposed to suck.
For example, imagine that you create a parallel educational system that is neither better nor worse than the traditional one, but it is 10x cheaper. Would it be popular? No, because using the new system would signal that you are poor, and so are most people you know. Rich people would avoid it; and they would also have an incentive to pretend that the new system is worse.
Or imagine a new educational system that is just as good, only 10x less frustrating for students. Then employers would avoid hiring people from that system, because they would suspect that such employees would have low frustration tolerance and would soon quit after the normal amount of abuse at workplace.
Or a new system where kids learn more easily because somehow all lessons are magically easy to understand? The employers would suspect that the kids are actually less smart then they seem, and will fail when facing a novel situation.
In other words, trying to make education better is like trying to make a marathon shorter -- the people who usually run marathons will reject the idea. Education is inefficient, frustrating, and gives unfair advantage to rich people... which is exactly the point. It prepares you perfectly for your future workplace.
(If you are interested in knowledge for knowledge's sake, then of course, Khan Academy is the way.)
> No, because using the new system would signal that you are poor
I'm pretty sure most employers aren't judging you based on how rich your parents are, with some obvious exceptions (Harvard) in which wealth isn't the only thing being signaled.
> employers would avoid hiring people from that system, because they would suspect that such employees would have low frustration tolerance
As a (very) small-fry CTO myself, I want knowledge and skill and more than I want frustration tolerance. I guess some companies want that, but other companies want other things. (and why would poor people have worse frustration tolerance?)
"(and why would poor people have worse frustration tolerance?)"
It's not that poor people would have worse frustration tolerance, it's that (whether rich or poor) coming out of an education system where learning was effortless, the teachers wee excellent, it was easy to learn, everything was ready for you when you were ready to take the next step, etc. is not good preparation for a workplace where it's "dunno the answer to that, the guy who does know the answer is out for two weeks so you have to wait that long, there is paperwork with the answer on it someplace but you'll have to search for it and nobody knows where to start, and there isn't a clear-cut answer at the end, just a 'good enough' and besides the boss/boss's boss/client is going to change their mind three times about what they want anyway".
The elephant in the room is that if your parents are rich they're probably higher in IQ and/or cultural capital, which makes you higher in IQ and/or cultural capital, both of which are hugely desirable traits in new hires.
(Ah, please take the previous comment with a grain of salt; it was exaggerated for artistic purposes. This comment reflects my beliefs literally.)
There are two ways how wealth impacts knowledge/skills that come to my mind immediately (and there are probably more):
First, rich people can spend more money on education-related expenses, and I think they have more free time on average (no need to keep two jobs; can save some time by spending money on something). So if we imagine two kids with equal intelligence, talent, chatacter traits, and hobbies; learning e.g. computer science in exactly the same classroom with the same teacher using the same curriculum; but one comes from a poor family and the other comes from a rich family, I would still expect different results, given the following:
The rich kid will have a better computer at home, more time to use it, no problem with paying for a course or a tutor if necessary. The poor kid will feel lucky to have a computer at all, will be limited to free resources, and will probably spend some time helping their family (working, taking care of younger siblings). -- Therefore, at the end of the year I would expect the rich kid to know more, on average.
Second, when people think about the quality of school, they usually think about teachers, curriculum, didactic tools, and whatever... but a crucial and often ignored factor is classmates. Kids inspire each other and learn from each other a lot. Or they can prevent each other from learning by disrupting the lessons. Your ability to choose a school with better classmates is often limited by money. With the same curriculum and same quality of teachers, I would expect rich kids to get better outcome at school, simply because they are not surrounded by classmates from dysfunctional families etc.
They differences may seem small, but they add together, and their effects compound. With exactly the same curriculum, I would expect rich kids to get better results, on average, even if we control for intelligence and other traits.
Therefore, from the company perspective, a rich kid seems a better bet, ceteris paribus. (The only disadvantage is that the rich kid will probably expect a higher salary.) Therefore, if you are - or pretend to be - a rich kid, you probably do not want to signal that you attended the school for poor kids... or the cheap school.
Re: frustration tolerance -- you want some basic level of it, like people who won't give up and quit after the first problem.
And speaking as someone who has come out of the 'practical training, non-university, on a ladder of accreditation' route and worked with people who went the university route, there is a difference. Not even so much in practical skills, depending what course they did the university people can be as good and up on those, but the entire shape of the learning experience, the environment, the content.
The practical-oriented really was teaching to the exam, telling you what you needed to know to do the tasks, but nothing extra. It was to get you qualified and out into some kind of paying job as fast as possible. If you wanted extra, you could then go up the ladder to the university. People from the university just had an entirely different experience, and yes they did indeed seem to have a more rounded education, a better understanding of the subject, and just that familiarity with how academia works that is hard to put into words but you know it when you don't have it and are trying to engage on the academic level.
https://www.qqi.ie/what-we-do/qqi-awards/certifying-qqi-awards-provider
https://www.youtube.com/watch?v=qK15HlhDbo4
Hey, you seem to know your shit. Could really use an impartial set of eyes on something...would you be willing to read a few pages and give an opinion?
It seems like this sort of info about a student's mastery and retention of material would be taken seriously by grad programs or employers in STEM fields -- have my doubts about other fields though.
The step you're missing is to make big companies accept your accreditation as a valid measure of a prospective employee's worth. Companies are often at least as interested in proof of conscientiousness as in intelligence, and often not at all interested in subject matter mastery.
At higher levels, the top institutions sell networking as much as anything else, and that's a very sticky equilibrium - one of the main benefit of a Harvard grad is that they know other Harvard grads, and that's valuable because all the top places hire Harvard grads, so their network makes them valuable enough for all the top places to want to hire them
Yes, and this is why I suggested that the price of the service would be correlated to its reputation: big companies accepting it = reputation. I guess it would need somebody with deep pockets to help it survive the early low-reputation phase.
Obviously, my proposal is not intended for people who have the means to make it into (and through) Harvard. I went to an ordinary university, and the amount of networking I did there was basically zero. (edit: except the internship program - the job I got out of that lasted 7 years.)
What's the textbook example of how to tactically use tanks in warfare? I'm thinking a WW2 battle or something were tanks saved the day, did lots of the special tanks things that can't be done by artillery or motorized infantry or whatever, and then some colonel wrote the book on it.
I'm hearing a lot from armchair generals on how to not use tanks ("Don't use them against other tanks", "Don't use them in urban areas", "Don't use them without infantry support" etc.), but I don't hear much about how they are supposed to be used, thus my question.
If you want to ague that tanks are obsolete in modern warfare this is your spot as well I guess.
If your enemy has a tank (or worse, a bunch of them) somewhere, and you don't have anti-tank forces nearby, then the tank can destroy whatever stuff you put through that area, unless they are well-hidden or well-fortified. And it's hard to move while being well-hidden or well-fortified, so if the tank got there first you are outta luck. Of course, you are aware of that and will not move into that area, but that means you can't have anything there.
That makes tanks basically movable walls (the size of the wall is the range of the tank of course, not the physical size of the tank). You put them where you don't want the other guy to go through - an important version of that is to cut off a force from their rear lines and force them to surrender.
The reason they are good at that is that they are armored pretty well and can fire on the move, which makes anti-tank platforms far more limited than say anti-artillery platforms (it's easier to sneak up on a self-propelled gun or destroy it after it fires, than to do the same on a tank).
So why not simply use fortification then? Concrete slabs can be moved by trucks, isn't that enough to make an area hard to move into?
First, a tank is a km-length wall that can move operationally at speeds of tens of km/h. A kilometer of wall is a lot of wall to place, or to move to a more relevant location when needed (and you want to fill in or exploit breaches quickly, which makes moving to relevant locations very important).
Second, its much harder to destroy a tank (that is maneuvering through tank-friendly terrain, not that is driving through hostile city streets) than to breach a wall, because the tank can shoot at you then move into hiding and possibly call for assistance, and a concrete wall can't.
Depends a lot on the period. You can get a good spiel of the WW1 retrospective & pre-WW2 expectations (which held up at least for early WW2) by reading "Achtung - Panzer" by Guderian, but the key doctrinal takeaway I remember would be:
-Use them in large concentrations (i.e: not in piecemeal), with motorized infantry to support and exploit their breakthrough (and absolutely not "supported" by leg infantry, which can't match their speed, and slow them down to get pummeled by artillery fire, as in WW1, or WW2 french doctrine).
-Best way to stop them is another tank (that was before air support got precise enough to pose a threat)
Supposedly De Gaulle reached similar conclusions in "Vers l'armée de métier" (or was it "La France et son armée"?), but I haven't got around to read it yet, so I can't comment.
What you're really asking for is how to do combined-arms warfare. There are very few military problems where the solution is "send tanks, just tanks", but many where tanks are a useful or vital part of the solution.
The classic use case for tanks, and *maybe* justifying a pure-tank force, is exploiting a breakthrough in mobile warfare. Not making the breakthrough itself; you'll almost certainly want artillery and infantry to help with that. But if you've broken through the enemy's defenses, you want to move fast and break things behind the lines before they can offer a coherent response, which will involve meeting engagements with elements of their incoherent response, and that's something tanks are really good at.
And for much the same reason, tanks are good at mounting rapid counterattacks in the face of an enemy breakthrough.
There are some good recent-ish examples (but not pure-tank) in operation Desert Storm, and in the Arab-Israeli wars.
https://en.wikipedia.org/wiki/Battle_of_Norfolk
https://en.wikipedia.org/wiki/Battle_of_Medina_Ridge
https://en.wikipedia.org/wiki/Battle_of_73_Easting
https://en.wikipedia.org/wiki/Valley_of_Tears
Sure, I get that you want to do combined arms warfare. But what is the role of tanks in combined arms warfare?
Why are they better than e.g. mechanized infantry at exploiting breakthroughs? Are they better armored so that they have an easier time driving past pockets of resistance? (If so, can't we just slap more armor on an APC to achieve something similar?)
What does exploiting a breakthrough actually entail in practice? Do you hunt down enemy artillery and C&C? Do you try to get the enemy logistics (by firing randomly at trucks)? Do you attack the enemy in the rear? Do you just drive as fast as you can against Berlin/Bagdad and hope to create as much chaos as possible on the way? I guess tanks need (mechanized) infantry to create a famous WW2/style pocket, but it's good to have the tanks in the front while doing so?
The Iraq tank battles just seem to be coalition tanks driving through and obliterating technologically inferior Iraqy tanks. Couldn't this have been done by infantry or air power? Valley of Tears looks interesting but the Wikipedia article is hard to parse, I'll look into it.
Mechanized or even motorized infantry can support breakthrough operations, and in some cases (e.g. when the enemy only has leg infantry), can do the whole thing.
But infantry, even mechanized, has to dismount to fight. Otherwise it's not infantry, it's just an inferior form of armor handicapped by having to haul a bunch of useless people around - and no, their shooting assault rifles out of firing ports isn't useful enough to be worth the bother.
And dismounting, fighting even a skirmish at a walking pace, and then remounting, takes time and costs tempo. Sometimes it's necessary, e.g. to clear enemy infantry blocking your advance in close terrain, but if at all possible in a breakthrough operation you want to defeat at least minor blocking forces on the move. For that, you want tanks.
As for what you do in a breakthrough, part of it trying to overrun C3I facilities, logistics nodes (not random supply dumps, but supply depots, railheads, critical road junctions etc), and artillery. I'd put it in that order of importance, but it's debatable. The other part of it is maneuvering to block enemy lines of retreat and reinforcement.
And pretty much all of it, is creating in the enemy's front-line troops and their leaders the firm perception that they don't know what the hell is going on in their rear, but it's really bad and if they don't run away *right now* they'll never have the chance. Then your breakthrough forces can ambush and kill them as they flee.
As for using infantry to "drive through and obliterate technologically inferior tanks", no. Literally no, because infantry doesn't drive, it walks. And if you're thinking they're going to drive through in their technologically superior infantry fighting vehicles, maybe but now the "infantry" isn't doing anything and even technologically inferior tanks are carrying bigger guns with longer range and heavier armor because the aren't carrying around useless infantry. Maybe you've got enough of a technological edge for that, but it's still fighting with a handicap.
If you're talking about infantry advancing against tanks on foot, no, infantry can't advance against fire like that. First, because infantry survives under fire by emulating hobbits - small, nigh-invisible, and living in holes in the ground. And second, because infantry can't fire on the move.
Even a "technologically inferior" tanks is in this context a machine gun on a gyrostabilized mount with a magnifying and if needed night-vision optical sight and nigh-infinite ammunition, with a gunner who is basically immune to suppressive fire, and capable of firing accurately while retreating faster than infantry can advance. Think first-person shooter video game - completely unrealistic at duplicating the experience of a *soldier* in combat, but pretty good for a tank gunner. Oh, and he has a big-ass cannon if he needs it.
The infantryman, is a very not-machine-gun-proof target whose attempt to advance largely voids his concealment and subterranean-ness, and sure maybe he's carrying a high-tech missile that could destroy tanks if he weren't trying to advance, but since he is it's just ballast slowing him down.
If the enemy brought *only* tanks, and parked them too close to a town or treeline or whatnot, you could imagine your infantry sneakily infiltrating into firing positions. But the enemy probably has some infantry of his own, and he's got that deployed to cover all those firing positions you were trying to sneak into. His infantry doesn't have to defeat your infantry, it just has to force it to reveal itself prematurely so that the enemy can put heavy firepower from his tanks - or better yet artillery - onto it.
This clarified a lot. Thanks!
Wait, that first line makes no sense to me, I thought (a subset of) tanks are explicitly designed to be used against other tanks?
More generally, I think tanks in open terrain beat infantry without tanks; combined arms is generally very important, of course, and the tanks can't be totally unsupported, but outside of cities it's hard to sneak up on a tank. Air superiority is of course the ultimate trump card, tanks don't beat bombers.
Historically, I think Germany's Blitzkreig of France is the ur-example of the value of light tanks and motorized infantry. The biggest advantage of tanks over artillery is their mobility, so if you want examples of things only tanks can do look for maneuver warfare.
It's a misunderstanding/oversimplification of American armored warfare doctrine during WW2, which emphasized tank destroyers (typically in battalion-sized units attached to infantry divisions) as a defensive counter to massed enemy armored offensives. The misunderstanding comes in reading this as saying that *only* TDs should fight tanks. Tanks were seen in this doctrine as being perfectly capable of fighting other tanks. The actual point of the TD emphasis was that the design differences optimized TDs for a defensive response role, while tanks were optimized for breakthrough and exploitation: American TDs of mid-to-late WW2 were somewhat cheaper, a bit faster, and mounted heavier guns than tanks of the same generation, at the cost of substantially lighter armor, making them better at responding to enemy offensives and fighting defensively from cover with infantry and artillery support, but much less survivable on the offensive. This, TDs were the first-line response in support of infantry for enemy armored offensives, freeing up tanks for things they did better than TDs.
When the doctrine was first developed, the intent was for TD to be a lot cheaper than tanks, initially conceived as light infantry-support antitank guns towed or mounted on jeeps or light trucks. This way, you could have TDs everywhere you needed them for a fraction of the cost of tanks. But by mid-war, TD designs got more capable, more tank-like, and correspondingly more expensive so an M4 Sherman tank wound up being only a little more expensive than an M10 tank destroyer.
I think tank destroyers technically aren't tanks.
What would have happened if the Germany had done the invasion of France with 50% less tanks and more motorized infantry instead? Would it have been worse, and if so why?
This is a random place to post this but I asked a question like this before on an open thread and Scott responded. Any help appreciated from anyone with relevant medical industry knowledge.
I am certain I have ADHD. It hugely affects my job performance and I'm constantly worried about getting fired. I'm hoping to get prescribed adderall. I'm wondering what the chances are with my current planned process:
-I have made an appointment with a telehealth psychiatrist through some large online group.
-This site specifically says they themselves do not prescribe drugs like Xanax and adderall, but that if they think it's necessary they can fax your PCP to have them make a prescription.
-I have made an appointment to see someone as a PCP next week, two days before the psychiatry appointment.
-This appointment is my first time meeting them and I said I wanted to talk about ADHD in an office visit
-But they are a Nurse Practitioner, so I have no idea if they're allowed to prescribe anything/more hesitant.
Checkout Ahead (helloahead.com). The PNP I work with prescribed me Adderall. No contact with a PCP (which I don't have). They also manage my anxiety meds.
Some NPs are not allowed to prescribe certain drugs. It should be fine for you to call and clarify with them, asking them if you come with a referral will they be able to prescribe for you. Congrats on taking steps towards treatment.
The authors of Meta-analysis studies should be required to present a table listing the included studies and excluded studies along with the exclusion criteria that each of the excluded studies failed to meet. That is all.
Actually, I lied. That is not all.
They should also list their reasoning behind their inclusion/exclusion criteria!
Just finished reading two meta-analyses with differing conclusions, and I've come to the conclusion that I'm no closer to the truth than I was before this exercise. I don't know if one or both of the authors are trying to pull a fast one. And without being able to look at the studies included in the meta, I cannot judge the validity of either of the metas.
Wow, that's irritating. They put "Meta" in their title and become arbiters who can't be challenged. (Sort of like freakin' Facebook changing it's name.). What field are these articles in? I've read a dozen or so psychology and neurology meta-analyses recently and they all really spelled out their criteria for judging study quality. Many did not list all the articles they considered, but in many cases that seemed reasonable -- there were thousands. Maybe writers of metas should be required to publish that info in an appendix, though.
It was two meta-analyses of long-term COVID. Actually, they didn't have opposite conclusions—just different conclusions. They spelled out their criteria, but it was what wasn't mentioned that made me unable to compare the two. The first M-A's criteria was only studies published in the English language. The other one didn't mention that criteria, so I'm left wondering if they had a more diverse pool of international studies, and that that may account for the difference in conclusions.
Also the first study had a criteria that each study have a minimum sample size of 30. The other study had higher sample size criteria. But I'm left wondering why the first study was OK with 30. Seems too small to be statistically valid (?). And how many of those studies in the first one had sample sizes less than 100? Aarrggghhhh.
About study size: the smaller the study, the larger the effect size has to be to capture it. If I wanted to find out whether capybaras weighed more than hamsters, and compared 2 groups of 15 randomly selected members of each species, I would definitely find a statistically significant difference in weight. If a small study finds a statistically significant difference, then you can trust the result (unless there is some other problem with the study design -- such as wrong statistical methods done, groups compared were different in important ways beyond the one you were checking for). However, if small study finds no effect that may be because effect doesn't exist or just because study was too small to have enough power to capture the effect.
About getting info about long Covid: Epidemiologist Jetelina, on Substack, just put up a series of posts on the subject. So this is sort of her unofficial meta-analysis. I trust her, mostly. She’s smart and thorough and seems to have no ax to grind.
Ha! I got kicked off her Facebook group. It was early in the pandemic and I was questioning the some of the consensus wisdom of epidemiological theory. I thought I was polite, but I got exiled from that FB group. OTOH, I can be rather outspoken so maybe I deserved it. ;-)
Wow. Well, she does have a bit of a kindergarten teacher quality to her -- sort of pathologically nice and hyperconventional. If you can still stand to have anything further to do with her, her Substack blog seems pretty good to me -- informative, and politics-free.
As for getting kicked off things -- I'm a member of the too-rude-to-remain club too. Spent a year on Twitter, driven crazy by snark and trolls even though I only followed science writers. One day a red-state male troll started dropping turds on a thread about some technical virus thing, so I came back with the term most likely to offend his demographic: cocksucker. Now I'm banned from Twitter and glad of it. Heh.
Should I be concerned about endocrine disruptors from microwaveable plastic bags of vegetables e.g. steamfresh?
McDonalds leaves Russia?
Pravda web site says good riddance.
‘ McDonald's sells "food” that is absolutely impervious to rot and decay. You can buy one of their hamburgers, put in on a shelf in your living room and just leave it there. After a year the burger will still look and smell the same. None of the rodents that you unwittingly share your house with will have deigned to touch it. Nor will any insect, no fly no wasp, nothing. Even bacteria will stay away from McDonald's products. This will give you an idea of the quality of American fast food. KFC specializes in products made from bio-engineered, hormone and antibiotics-fed chickens growing so fast they never learn to walk. The meat from such creatures will probably help accelerate your transition from "cisgender” to anything in the LGBTQ spectrum, whether you want it or not. Starbuck's specializes in something it dares to call coffee but that anyone who really knows and likes coffee will shun.’
Well, if you skip the LGBT thing, are they wrong?
I never felt "wow this is quality food" while eating McD, it always tastes like some kind of space colony faux food that's supposed to remind me of what they used to eat back on Earth, but fails at it.
No. It’s not exactly fine dining.
> None of the rodents that you unwittingly share your house with will have deigned to touch it
McDonald's, KFC and especially Starbucks may be terrible, but I'm glad I live in a country where it's not simply taken for granted that every house has rodents.
It’s a classic case of ‘those grapes were sour anyway’
https://dailyhive.com/vancouver/mcdonalds-russia-rebrand-uncle-vanya
"Degenerate Western food makes you trans" (presumably in contrast to virile, natural Russian foodstuffs) is a take so cartoonishly anti-Woke I didn't think pravda.ru would actually publish it.
That sort of stuff is aimed at Poland, Czech Republic etc. Russia is the defender of traditional values. It’s an on going thing with Pravda. Trying to appeal to East European NATO countries. I know. It’s all so very strange.
Rodents? This situation in Russia is grimmer than I would have thought.
Probably just the odd capybara in the pantry here and there.
The grammar is pretty good in this one. I’m guessing it was written in English by a human. The machine translated stuff usually makes a hash of idioms. The underlying differences in grammars show through too.
What percentage of people who regularly attend KKK meetings probably have diagnosable mental illnesses?
Are regular KKK meetings actually a thing? I was under the impression that the KKK doesn't meaningfully exist any more. This isn't Robert Byrd's day.
The ADL (whose incentives certainly run towards maximising rather than minimising the extent of Klan activity) most recently https://www.adl.org/education/resources/reports/state-of-the-kkk reports the existence of thirty groups claiming to be the Klan, but most of them are just a handful of people and they tend to pop in and out of existence rather rapidly as people lose interest.
The biggest remaining group seems to be the Loyal White Knights https://www.adl.org/resources/backgrounders/loyal-white-knights-of-the-ku-klux-klan who apparently have about a hundred members. But they seem to be pretty geographically spread out, so I have my doubts whether they have anything you could call a "regular meeting".
Based on the abstract, this study looks like it attempts to answer that question:
https://journals.sagepub.com/doi/10.1177/0002764219831746
Sadly, the actual results seem to only be in the article body, which is behind a paywall. Unless someone here has institutional access to Sagepub or JSTOR and feels like reading it and summarizing for us.
It appears to be open-access on ResearchGate: https://www.researchgate.net/publication/331428143_The_Problem_of_Overgeneralization_The_Case_of_Mental_Health_Problems_and_US_Violent_White_Supremacists
I'm too much of a basket-weaving major to confidently give a reading of this, but here's some cliff notes until someone more qualified comes along:
- Sample size of 44 ex-extremists undergoing long interviews
- 57% reported mental health problems before or during extremist involvement
- 62% attempted or seriously considered suicide
- 73% reported substance abuse issues (64% prior to age 16)
- 59% reported family history of mental health issues
The conclusion says "so far the field says the rate is no higher than average, but we found it's actually pretty common".
I don't like that study because it focuses on "former VIOLENT U.S. White supremacists" (emphasis mine). I'd rather see the data for the KKK members who don't actually act on their racist beliefs by attacking nonwhites (they probably form the group's majority).
My hypothesis is that, if you're so racist that you're actually willing to go to KKK meetings, you're probably mentally ill.
Could work the other way round, too: if you're so mentally ill that the only group that will tolerate you is the KKK
What would those people’s diagnosis be?
I suspect a lot of depression, antisocial personality disorder, and PTSD.
Seems like a fair comparison group would be other extremists, both left, right, & totally disaffiliated -- like test the people in Anonymous (if only they weren't all anonymous!). It may be that unhappy and desperate people are drawn to extremism, and/or extreme views create desperation. If some of the wilder apocalyptic theories were true, suicide might be a rational choice.
https://www.youtube.com/watch?v=rfjZgdQsr6s&ab_channel=RogerJamesHamilton
From 2020-- predicts the invasion of Ukraine. Assumes that dictatorships can play a long game, and win. That part might be wrong.
Notably, he says that Alexsandr Dugin ("Putin's brain") wrote a book in 1997 called "Foundations of Geopolitics: the Geopolitical Future of Russia" which is Putin's playbook and can be used to understand and predict Putin's moves.
The book's 40-year plan:
Step 1. Invade Georgia
Step 2. Annex Crimea and control Ukraine
Step 3. Separate Great Britain from Europe (Brexit?)
Step 4. Chaos: sow division in Britain and the US
Step 5. Create "Eurasia" which (based on the map) looks like basically Russia surrounded by "buffer states", with China "divided and in turmoil", and Japan and India as allies of Russia (I note that while Japan voted to condemn Russia's invasion, India was neutral and is now setting up a special payment system to avoid commerce interruptions caused by sanctions. Evidently Russia bagged China as a Russian ally instead of Japan. My impression is that while China isn't completely sold on the invasion yet, it is spiritually siding with Russia and the reason it isn't doing more to help Russia is that it fears "secondary sanctions".)
Steps 3 and 4 involve using the 3 Ds, Deception, Destabilization and Disinformation, to create internal divisions in Britain and the U.S.; internal divisions in the U.S. are to make the U.S. more isolatist and distant from Europe (hence Putin's support for Trump, who in turn pulled out of multiple international treaties). Also the book calls for a "Continental Russia-Islamist alliance [as] the foundation of anti-Atlanticist strategy" (hence Russia's ties to Iran & Syria)
He also has a video about China's master plan: https://www.youtube.com/watch?v=WaAOss6W1u0 - this video begins by telling me that China already beat the U.S. on the metric of "GDP by PPP (purchasing power parity)" in 2014, though note that the per-capita *incomes* of Chinese people by PPP are just over one-fourth the incomes of U.S. people. Which itself is probably part of the plan... to sacrifice income for more GDP and more power on the world stage. China seeks world domination, and on the economic front, they seem to be ahead of schedule.
Oh, now that makes me wonder if some Chinese policy wonk read that book and decided that instead of letting Russia be Ruler of Eurasia, with Japan as an ally and China internally divided and in turmoil, it would be smarter to cosy up to Russia, sell themselves as an ally, and remain a major, non-conflicted, partner if or when Eurasia is a thing that happens.
That would explain (to me) why China is lining up with Russia right now, instead of sitting back and seeing how things play out. Even if Russia manages to shoot itself in both feet with Ukraine, China can still be the "I'm your friend, see how I supported you?" partner and be in a good position to gather up the fragments from the fall-out if Russia instead starts falling apart with internal conflict.
Meanwhile, in this video he predicted a near-term financial crisis which didn't happen (we just got some inflation, and if there's a crisis now I think it'll be triggered by Russia) - I suspect that he doesn't understand macroeconomics well enough (which is not unusual; my impression is that even economists themselves have multiple incompatible models that make different predictions): https://www.youtube.com/watch?v=EYOVoQT2yQg
In his otherwise reasonably accurate 2020 prediction video about vaccines, he characterizes what sounds like it should have been a crisis in 2021 as "Massive wave of bankruptcies, unemployment rise again, and debt bubbles bursting": https://www.youtube.com/watch?v=yahfx_JIihQ ... looks like he overweights the importance of debt and QE https://www.youtube.com/watch?v=nUOVRo_EIrE ... whereas my model is closer to market monetarism: I do not find debt to be important except indirectly, and I expected a "market adjustment" but no crash.
I have a friend who's in the market for a dating coach. He's in the general Astral Codex Ten audience demographics - about 30 years old, tech professional, generally liberal. He has been unable to find a good option with experience working in those demographics. Does anyone have a person to refer him to?
I would advise finding a local one. Dating advice changes quite significantly depending on country or even city. A local coach knows local quirks and also good spots/locations to meet people.
Also check the coaches age. A lot of them are mid 20s or even younger. The game works differently for 30+. A 22 year old won't give you useful advice for your age bracket.
If your friend is into online dating I might give some pointers on optimizing his profile. Been doing a lot of work on this topic for a machine learning project I'm working on.
What are your recommendations on profiles?
In general the youtube channel "School of Attraction" has a lot of online dating advice. Also search for "Reddit Tinder Guide". There are multiple good ones.
More than that we need to go into the specifics of the actual profile. I'm pretty new at substack and don't know if there is any kind of personal message. But if you want you can contact me and i can give you/your friend specific advice about the profile(s)
I've heard good things about Relationship Hero (disclaimer: from the person who founded it).
I’m going to assume that date coaching is a thing now. It wasn’t when I was single.
Is your friend very shy? That would make it harder. If that is part of the problem, he could try getting regular exercise. Cardio and weight training relieve anxiety and help with self confidence.
If he is up to it, being able to dance a bit would give him a chance to meet potential partners.
My friend is not shy, and has been in the dance scene for years to no avail. I think he knows he has a problem that needs personal coaching attention.
I grew up in Russia (but left to the US as a teenager, on my father's H1B visa). This means I still know a number of people in Russia who are disproportionately techy, and statistically I'd expect some of them to be interested in no longer being in Russia right now. (Some will have left already, some will want to stay no matter what.) Is there a more effective way to look for jobs that might sponsor them than "ask your company if they'll sponsor a visa, ask your friends to ask their companies if they'll sponsor a visa, etc."? Also, consider yourself asked :)
I work for a biggish consulting company and our policy is to sponsor advanced degree holders (MBAs, PhDs, MDs etc.) coming in at the consultant level but not for analysts who usually come in at the undergrad (BS/BA) level. That's for the U.S., not sure what the policy is in our international offices. I've also asked about expanding sponsorship to further down the ladder and it's definitely being considered but I don't think that policy is likely to change at least in the short term.
If you know anyone who might be interested they can reach me for more info at gbz.uraarffrl@tznvy.pbz (rot13)
Note that I asked this again lower down in the advert post, and they (Dave92f1) said that they're willing in principle but haven't done it in practice, and also that they'd consider remote.
Also, if you want to send a resume my way, my company (http://www.cyberoptics.com/) is looking for at least one software person and at least in principle willing to sponsor people; I'm lastname at gmail.
Scott, thank you for helping set me on the path towards effective altruism. Your writing was deeply influential to me in high school and early college, and I think it was a really big part of why I got into EA (where I get a lot of self-esteem from these days). Since I think its relevant, I'm a senior software engineer at a FAANG and I donate around 30% of my pre-tax income, so include some fraction of that in your total impact!
[Context, I'm reading through 'What got you here won't get you there'. It recommends thanking the top 25 folks most influential in your professional life. Scott handily qualifies for me.]
Just want to say that donation of 30% of pre-tax income is pretty darn impressive.
Looks like you are really walking the walk.
Good for you.
Thanks! I really appreciate the encouragement; I don't have many EAs in my social circle so the positive reinforcement is rare and valued :)
You're welcome!
Seconded. I'm in the middle of a career switch from lucrative but soulless software dev to medical bioinformatics that's socially useful, very interesting, very frustrating, and paying next to nothing.
Scott's writing, especially UNSONG, has been one of the main things that pushed me to finally do it and ruin/fix my life.
Congrats! I'm almost surprised you managed to find a software related job with poor pay :)
A Harvard Business Review reports that 'study after study puts the failure rate of mergers and acquisitions somewhere between 70% and 90%' (https://hbr.org/2011/03/the-big-idea-the-new-ma-playbook).
Has anyone seen studies citing a failure rate of one country invading another country? It seems like this might help forecasters take the outside view of Russia-Ukraine. I can't figure out the right search terms.
I expect a wider range than the 70%-90% for M&A failure. One reason is the difficulty of identifying invasions due to proxy warfare. (Should the Bay of Pigs landing by anti-Castro Cuban exiles be classified as the US invading Cuba by proxy, or an abortive civil war?) Another reason is the difficulty of defining failure, since political goals are harder to evaluate than corporate profits/losses.
The rates probably vary by technological era, as new weapons make offense or defense easier.
What are your favorite pieces of fiction from Scott?
Personally -- even though it seems crazy to pick this one since I'm sure it was low-effort compared to many other stories -- I would have to say [and I show you how deep the rabbit hole goes](https://slatestarcodex.com/2015/06/02/and-i-show-you-how-deep-the-rabbit-hole-goes/). I think the ending is something like the greatest thing ever. Also very fond of [Sort by Controversial](https://slatestarcodex.com/2018/10/30/sort-by-controversial/) and of course UNSONG.
Looking at the [#fiction tag on SlateStarCodex](https://slatestarcodex.com/tag/fiction/), I also realize that there are some I haven't even read yet.
I have a theory that Scott pseudonymously wrote an alchemical allegory disguised as a bad Harry Potter fanfic, but nobody got the joke, so he had to write a whole essay explaining it. If true, that is my favourite.
"Chametz" was fun! And I very much enjoyed "Ars Longa, Vita Brevis"
https://slatestarcodex.com/2017/04/13/chametz/
https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/
Was just thinking about World War II is Not Realistic, not sure if that counts as fiction per se but it's got to be in my top 5 or so http://web.archive.org/web/20160908033911/http://squid314.livejournal.com/275614.html
Reading "Sort by Controversial" and the comments thereof made me learn the origin of the phrase "not by one iota", which is now one of my favorite facts. Is this what people learn in Sunday School? Why did no one tell me?
> The First Council of Nicaea in 325 debated the terms homoousios and homoiousios. The word homoousios means "same substance", whereas the word homoiousios means "similar substance". The council affirmed the Father, Son, and Holy Spirit (Godhead) are of the homoousious (same substance). This is the source of the English idiom "differ not by one iota." Note that the words homoousios and homoiousios differ only by one 'i' (or the Greek letter iota). Thus, to say two things differ not one iota, is to say that they are the same substance.
https://ksuweb.kennesaw.edu/~tkeene/ogtHomoousios&Homoiousios.htm
Except, is that etymology true? The online dictionaries I check don't mention it as an origin for that meaning and instead say it's from iota being the smallest letter and therefore almost insignificant. (The do mention that 'jot' derives from this, as iota also is transcribed as jota.) And Wiktionary quotes it as being from the New Testament "until heaven and earth pass away, not an iota, not a dot, will pass from the Law". So that predates Nicaea (I presume?).
So, uh, is this a scissor statement? Discuss at your own risk.
One piece of evidence against this etymology - Hebrew has an expression "On the tip of a yod", which means "decided based on a really tiny difference between two otherwise equal things", which feels like the same expression. Yod is the Hebrew alphabet version of iota and is just a really small letter (אבגדהוזחטי - yod is the little one on the left). Iota is also a pretty small letter. So if "on the tip of a yod" and "by one iota" have the same origin it was probably from the graphics of it rather than some complex greek spelling.
Something I thought about just now - Hebrew narrowly missed a chance to have a letter called 'Yoda'
I just followed this link to reread the pills story and it struck me that of course William van Orange
I had missed this one.
A correspondent on the ground reports that Elon Musk has challenged Vladimir Putin to single-combat. https://twitter.com/antoniogm/status/1503392910267531271
This is, of course, lunacy.
This is kind of boring except for the fact that the richest man on the world is saying this stuff (though that novelty wore off long ag)
If there's any skill of Putin's I don't doubt, it's hand to hand combat. Musk would only have a chance because he's challenging a 70 year old man, so it's a lose-lose situation for him - either beat up a harmless grandpa or even worse, get beat up by a harmless grandpa that happens to be a sambo black belt.
Putin won’t be 70 till October. :)
Edit
I bet Putin learned how to fight dirty but effectively as part of his KGB training.
As I understand the customs of dueling, Putin would now have the choice of weapons.
Mano a mano because of his judo experience? Nerve agent soaked rags? Hockey sticks?
Putin will select a capsule-shaped object whose purpose and usage is known only to himself and a couple of guys in the FSB who couldn't figure out how to leak the info to Musk before the fight. RIP Musk.
Rockets.
Putin may be granted the courtesy of making the choice, but the choice is a Hobson's choice.
It must be chess.
Well, Putin theoretically has the power to stop the Invasion of Ukraine, but Elon certainly doesn't have the power to... force Ukraine to surrender? ... so this simply doesn't work even in principle; Putin doesn't have anything to win.
I ran across this the other day. The wonderfully weird pigeon guided bomb project run by B F Skinner.
https://en.m.wikipedia.org/wiki/Project_Pigeon
Is heavy meat eating in humans an adaptation for famine resistance?
There are periodic droughts, blights, etc that hurt crop yields. If the crops that are grown all go to feeding people, then any drop in yield means someone goes hungry. Meat eating provides resilience.
1.) Some animal feed (corn, turnips, etc) is also edible by people. In times of famine, humans can eat this. Animals go hungry (or production decreased) instead of people or switch to non-human edible food (like grass).
2.) Animals can be slaughtered during times of famine. By killing animals early or killing animals kept for eggs or dairy, additional calories can be gained in the present.
Right now we overproduce food (as measured by calories) and invest the excess in producing meat/dairy/eggs. As society moves towards less animal based food, are we going to get rid of our safety margin? Are we going to become more vulnerable to famine?
I think it's mostly an adaptation to A: humans evolving long before agriculture, as hunter-gatherers, and B: agrarian humanity finding itself on a planet with a lot more mediocre land suitable for grazing livestock than good land suitable for growing grain.
I'm not convinced that "society moving to less animal based food" is an actual trend. While it may be a trend in the particular geographical areas and social classes in which ACX commenters tend to move, I think this trend is more than cancelled out by the billions of people slowly moving out of poverty and finding themselves able to afford to eat more meat. In China, for instance, annual meat consumption has gone from 10kg per capita to 50 kg per capita since 1980: https://www.researchgate.net/figure/continued-growth-projected-in-chinas-per-capita-meat-consumption-source-usda_fig5_321111368
In the US, meat consumption has been pretty much flat since 1999 at 265 lb/person. It decreased a little with a price spike around 2010 but has now recovered: https://www.researchgate.net/figure/continued-growth-projected-in-chinas-per-capita-meat-consumption-source-usda_fig5_321111368
It's unlikely to be a deeply genetic adaptation, but you could see it as a cultural adaptation, in some places, in some contexts.
Meat animals could be used as a store of calories in pastoral cultures. At the same time, the animals would be used to turn poor land into usable calories in the first place - i.e., having a flock of goats pasture in the inarable, rocky scrub of the near east or herds of cattle ranging across the dry American west.
More typically, though, food preservation was the buffer against famine. Animals only live so long when you don't feed them.
So people learned to store grain, ferment sugars, salt meat, make cheese of milk. We still do some of these things, and we also can and freeze and so on. In the case of famine, we would still mostly rely on these methods - and would likely devote less calories to meat production to re-establish a buffer going forward, but that wouldn't change our food supplies then.
Our food buffer is HUGE in historical terms. We just produce a crap-ton of calories with modern agriculture. So no, I don't think we're becoming more susceptible to famine. Say what you will about it from a gustatory perspective, a sack of rice, cans of beans, and a pallet of spam tins keeps a heck of a lot better in a basement than a live cow.
No genetic adaptations!? Teeth shape and size are genetic adaptations, and humans have omnivore teeth — sharp front teeth (incisors and canines) to rip and cut meat as well as flat molars to crush plant material and chew meat. Indeed, the acquisition of fire between 1 and 2 million years ago (depending on which group of paleoanthropologists you listen to) resulted in the more efficient digestion of animal protein and very likely affected the shape of our mouths as well as the shape of cranium.
Despite PETA claims that human digestive systems are that of herbivores, we don't have those specialized digestive sacs that herbivores have evolved to temporarily hold plant material (along with the specialized gut fermentation bacteria) to ferment that plant material. Ruminants like Cervidae and Bovidae do their fermentation in forward sacs, whereas Equidae, Rhinocerotidae, and all of the Cercopithecidae (I think) have posterior hindgut sacs. Omnivores and carnivores lack those specialized storage and digestive sacs to ferment plant material — as do humans. Also, humans, like other other omnivores, have intestinal tracts are in between the length carnivores and herbivores.
Humans share all those traits with the other primates, which, in general, obtain the bulk of their calories from plants, not meat.
Gorillas are nearly vegan. And yet, gorillas have sharp front teeth (much sharper, in fact, than ours!), and guts like our guts, without any of those specialized sacs.
One could make a similar point, by the way, about the famous frontally placed eyes, often mentioned as evidence that humans evolved primarily as hunters. All apes have them. And yet, gorillas don't hunt.
I think the main way in which the acquisition of fire matters, and may have driven changes in our anatomy, is not that cooking allows you to digest meat, but that it expands the range of plants you can eat, to include starchy roots, grains and legumes, which grow in the wild.
You can eat meat raw, but you can't obtain many calories from a raw potato. Present day adherents to "raw" (in the sense of uncooked) diets can eat any meat, but the range of plants they can eat is limited.
There is much evidence of consumption of wild grains before there was agriculture, and of course foragers eat wild starchy roots. This would have needed fire.
I don't mean to argue that human beings are meant to be literally vegan, don't get me wrong.
Yes, agricultural humans get most of their calories from plants, but for non-agricultural societies that's not necessarily true. For instance coastal human societies get most of their calories from fish, seafood, and marine mammals. Of course, the Inuit don't get much in the vegetables, but fisher societies like the Kwakiutl of Northwest survived on dried fish (fire required) and seals for most of the winter season. Their diet was supplemented with berries and nuts in the summer, but by far their largest caloric intake was from animal protein.
Archeological evidence shows that coastal humans have been exploiting the high protein resources of littoral regions for hundreds of thousands of years — even before modern humans — at least back to 200kya. Plus shells were traded inland as decorative objects as early as 60kya, and probably earlier.
In more modern times, cattle herders of the Southern Sudan and all along the Sahel, have a very high protein diet with lots of dairy in it. Some millet and maize, but vegetable diets are not certain in semi-arid and arid areas. Before cattle domestication, savannah dwellers definitely followed and hunted game, and their diet likely had a very high protein component. The same goes for peri-glacial dwellers in Europe where we at least one example where hundreds of mammoth were killed and butchered in a single event. It's estimated that several hundred or even a couple of thousand people would have had to participated in this hunt, and the meat was probably smoked and preserved and would have been the chief component of their diet through the long northern winters.
In the tropics and in highly fertile regions, a high-protein diet was less needed. Modern rainforest indigenous peoples have high carb diets supplemented by animal protein. It's believed that pre-agriculturalists of the middle-east survived off an abundance of seasonal plant sources.
As for your comment below about Gorilla teeth, proportionate to mouth size Gorilla molars are like 2x the size of human molars. All the better to chew uncooked plant materials. And the incisors are much larger than human incisors, but according to Dian Fossey, they are well-designed for pealing bark off trees. I don't know much about Gorillas, my training in primatology was only focused on human ancestors.
Granted fire helped to cook plant materials for humans, but there were vast tracts of the planet where humans lived where plant resources were not enough to survive on year round.
>Gorillas are nearly vegan. And yet, gorillas have sharp front teeth (much sharper, in fact, than ours!), and guts like our guts, without any of those specialized sacs.
"While gorillas are genetically similar to humans, they have very different digestive systems—more akin to those in horses. Like horses, gorillas are “hind-gut digesters” who process food primarily in their extra-long large intestines rather than their stomachs. "
https://www.theatlantic.com/science/archive/2018/03/gorilla-guts/554636/
I just meant that gorillas do have the particular features that Beowulf claimed are, in humans, adaptations to meat eating (sharp canines and the lack of cow-like stomachs).
As for the fact that our guts are smaller, I think it would be misleading to describe it as an adaptation to meat eating.
Instead, it represents, more generally, a shift away from fiber as a calorie source, and towards fats and carbs.
Apes such as gorillas can obtain lots of calories from the fiber in foods that don’t have so many carbs or fats in them, because, in their guts, fiber ferments, generating calories.
We can’t live on high-fiber, low-carb, low-fat foods; we live on foods high in fats and/or carbs. This doesn't mean "meat"; it means meat, fruit, nuts, starchy roots, grains, and legumes.
All the foods I just listed are available to foragers (contrary to "Paleo" myths).
In particular, starchy roots, grains and legumes require cooking for our digestive system to be capable of extracting the carb calories in them. This isn’t just because those plants are “hard to chew”. If you gulp down raw, uncooked flour, you won’t get many calories from it.
The discovery of fire, by allowing us to extract calories from such starchy plants, must have encouraged, and at least partly explains, this revolution, the shrinking of our gut and the change in what we use as fuel from fiber to fat and carbs. Because of fire, we became much better at living on wild tubers that other apes are, and this must be at least part of how we could afford to give up the ability to turn cellulose into calories.
So I think that the shrinkage of our guts isn't exactly an adaptation to meat eating, but more generally an adaptation to a whole range of foods.
Sorry, I was unclear; was responding to: "Is heavy meat eating in humans an adaptation for famine resistance?"
Humans are clearly genetically adapted to eat meat. What they likely aren't is genetically adapted to eat meat [i]specifically as a famine resistance technique[/i], except insofar as eating [i]anything[/i] is a 'famine resistance technique lol
(Edit: how the heck do you do italics in these comments?)
Ahhhh. OK. But, yes, I'd say a diversified portfolio of cultivars and domesticated food animals would provide some level famine resistance. The Irish Potato Famine comes immediately to mind as food monoculture that failed. Granted, the Irish population leading up the famine was so dense, that the average farmer didn't have the option of cultivating acres of wheat and large fields to support dairy cattle. Potatoes were the optimum solution for small plots of land—until the blight hit.
I don’t think you can do italics here. Pretty sure most of this blog’s readers auto-render html tags though. ;)
Animals are a less efficient way to produce calories. A given unit of land produces more calories with crops than with animal agriculture. Look at societies that are actually still vulnerable to famine: meat is a luxury.
So no, we're not somehow becoming vulnerable to famine because of veggie burgers, even in theory.
This is only because you're looking at through the lens of modern agro-business practices. Cattle can survive and even thrive in high desert environments (at least ones that have bunch grass). A rancher I spoke to in eastern Oregon explained that depending on the aridity and the grass density, it takes between 1 and 2 acres of land to support a single steer. Most ranchers round them up and ship the off to feed lots when they're 16 months old (if I recall), but before they're fully grown, to speed up the beef production cycle. But this rancher raises them until they're adults (2 years?), and slaughters them then. Fully grass fed. No growth hormones. No antibiotics. Raised on land that is too dry for regular crops, and too hilly for irrigation.
Iceland has a lot of sheep, which are migrating, self-feeding, and iirc return to their herders only for the winter. Outside of the kinda-fertile region around Reykjavik, Iceland in the summer looks like a sci-fi barren planet. The sheep don't mind.
Yes and no. If you look at the animals humans have domesticated, you will notice one thing about the vast majority of them: they eat something humans can't or won't.
The two big candidates are "cellulose" (cows/sheep/goats/horses/camels/water buffalo/donkeys/geese?/llamas/alpacas/rabbits/ and "vermin" (cats/ducks/chickens). Fish, which we haven't generally domesticated but hunt in massive quantities, also eat cellulose (i.e. algae) at some degree of directness (the trouble there is that many of the things that eat algae are themselves too small to eat).
*Grain-fed* cattle are a luxury, but meat and dairy in general frequently aren't (the traditional Mongol and Eskimo diets are nearly 100% animal). And even in the modern day, a grain farm does produce a lot of otherwise-useless plant matter.
That's true if you have land that could be used to produce plant-based food for humans. I believe that if you raise livestock on land too poor for crops, and by feeding them food waste, then they're pretty efficient at turning 0 calories into some calories.
That is true of sheep farming. They are ideal for hilly, cool, wet areas that would be impractical for crops.
That's definitely true of chickens, and sometimes true of cattle, etc. Water use, however, also needs to be counted. Also rabbits have been used in that way, though IIUC the process was a bit labor intensive.
Agriculture has always had higher water use than herding. Historically, there either had to be enough rainfall to support a yearly cycle of planting and harvesting, and, if not, irrigation had to be implemented. So agriculture clung to areas with fertile soil and plenty of water. Meanwhile nomadic herders occupied the (a) high steppes, like central Asia (where agriculture was impossible until modern grain cultivars were developed), (b) arid and semi-arid areas, like the Sahel, or (c) mountainous areas which were too difficult for terraced farming.
There have been arguments made that modern beef farming is water intensive. It need not be — if it weren't for the economics of fattening steers faster to get them to market faster. And I'm not so sure it really is as wasteful as some environmentalists and animal rights advocates claim. For instance, most all of the beef water use estimates that I've seen share the fatal flaw of assuming that corn (maize) kernels are the main component of the silage that cattle eat in feedlots. This ignores the fact (either out of ignorance or intent to deceive) that silage consists of the entire leaves, stalks, as well as the ears of corn, that are chopped up and allowed to ferment in silage tanks. So a steer is consuming entire corn plant (except for the roots). That may actually increase the water usage of feedlots, but it also means that cattle are eating cellulose laden leaves and stalks that humans are unable to digest.
BTW: before the last round of drought in California, almond growing consumed 1/10th of California's captured water. That's between 1/4 and 1/5 of all the water used in California. And cities consume less than a 10th of the captured water. There's been lots of talk recently about almond farmers making a big effort to waste less water, but I haven't seen any numbers on the conservation savings.
Again though, is that water that could have been easily used for something else? Or are they drinking water from puddles after a rainstorm and muddy creeks, and eating plants that contain water that would otherwise be inaccessible?
Agriculture and raising livestock are pretty new, while hunting is quite old. It would be surprising to me if developments of 10 thousand years or less had enough time to make a strong selection pressure (wolves were domesticated before that, but I don't think they were usually eaten; https://storymaps.arcgis.com/stories/893c422c13424a089b781564e9f69735 says the first animals domesticated for food were sheep around 10K years ago). My intuition could be entirely wrong here, though.
I suspect that technology can make for a stronger margin than animals, particularly in being able to move food from places where it is plentiful to where it is scarce, as well as preserving food. That's costly, but so is meat. An extended, worldwide famine would presumably impact livestock as well, although we could get at least nonzero food value from marginal land and food waste, which would help. A world with 0 livestock seems quite far away, though.
There's *definitely* been adaptation to agriculture. Some notable effects:
- Various modifications to alcohol dehydrogenase to reduce the likelihood of alcoholism.
- Modifications to the ergothioneine transporter to make it more efficient in populations dependent on wheat as a primary food source (which is extremely low in ergothioneine).
and lactase persistence
"Domestication" isn't an all or nothing thing, and one could argue that raindeer herding has probably been going on as long as people lived in marginal northern areas. In that sense I suspect that chickens (i.e. "Indian jungle fowl") were the first to be "domesticated", though this wouldn't mean "fenced in and only fed what we choose to feed them", but rather "people live near flocks of proto-chickens and drive the other predators away from them". Over enough time this evolved into the current situation. (That's sort of how we supposedly domesticated the dog, also. People put out garbage and the wolves came around to scavenge from it. The ones who got along better with people were more successful scavengers.)
We aren't going to get to a world with 0 livestock. But we might eventually get pretty close. (Are animals kept in zoos livestock? When Berlin was under siege during WWII the animals in the zoo were eaten.)
This seems trivially false. Humans were eating meat long before agriculture was a thing. As society moves towards less animal based foods our safety margin will be just stored differently.
That's a bit idealized. A century ago people tried to have supplies on hand to survive a year of crop failure. In modern cities most people can only survive a few weeks, and that by going hungry. It's like JIT manufacturing, pursuit of efficiency is (intentionally) done by reducing the safety margin. (I'm not asserting that's the only way it's done, just that that is intentionally part of the methods used.)
We can afford to do this because we have global trade. My country is more than self-sufficient with food, but if all the crops were suddenly wiped out at once then we'd simply switch to importing for a while.
The Irish Potato Famine happened because you couldn't just make a phone call and get half a million tons of Idaho's finest on the docks at Cork in a week; certainly not at a price the Irish could afford.
A subtitled version of the relevant portion of Nevzorov's video (linked in 3.) can be found here: https://www.youtube.com/watch?v=OutvYSl_TLc
Where do you go for book recommendations? Looking for e.g. empirically-minded bloggers who discuss the quality of new books often such as marginalrevolution.
During 9/11, there was concern about backlash against innocent Arab-Americans. During COVID, there was concern about backlash against innocent Chinese-Americans. I haven't heard anyone worry about backlash against Russian-Americans now. Sure, people are cancelling Tchaikovsky concerts or whatever, and some people with Russian citizenship are having hard times, but no hate crimes against second-generation Russian immigrants or whatever.
Are we ignoring these now, were we over-panicking before, or is there some interesting difference between this situation and the others?
I was just talking to my coworker this morning about my concern for Russian hate increasing (and how, more broadly, the racist hate of the 20th century has been replaced with hate based on political beliefs and country of origin.)
Besides the already noted difference that people believe they can identify Arabs and Chinese on sight, but not Russians, I haven't seen anyone mention that this didn't happen to us. Sure, we're on the Ukrainians side and all, but we're not viscerally pissed off the way we were after 9/11.
There's been some of discussion of this on DSL. Being rationalist-adjacent at least, we're pretty much opposed to punishing random bystanders because of where they happen to be born, but there also seems to be relatively little of that happening, particularly at the "people getting beaten up in the streets" level as opposed to the symbolic and annoying Tchaikovsky-ban level.
Possibly it helps that almost no Americans are confident in their ability to distinguish random Russian-Americans from random Ukrainian-Americans.
The symbolic annoying Tchaikovsky bans, those are easier to target "accurately", and I'd like to see more pushback against that sort of thing.
I have seen a lot of posts on e.g. reddit condemning hate crimes against random Russian-Americans (I remember one from a week ago where someone had thrown bricks through the windows of a Russian-American owned business.)
To me the bigger difference is the behavior of big institutions. My university did not send out any email reminding us "hey don't start going and harrassing random Russians", the way my old university sent out an email saying "hey don't blame random Chinese people for covid." To be fair, this may be a difference in universities, since I switched in the last couple years. And also to be fair, all the phrasing I've seen on discussions/support/resources have been careful to phrase their offerings as for "anyone affected by" the invasion, which is wide enough to include Russians with various troubles.
The Chancellor of Texas A&M just sent out an e-mail a couple days ago saying "I hereby direct you to sever ties with Russian entities" and "The Texas A&M University System will not tolerate or support Russia in any way". I thought this was a bit drastic and cruel and tasteless, but as far as I can tell, the main job of the Chancellor is to reply-all to the "Happy Holidays" e-mail from the President with a "Merry Christmas" e-mail.
Good points all, though this just reminded me that my mother (in a European country) got called by her boss because he wanted to know if she'd heard the 'rumors' that all Russians employed by her employer were going to be fired. She told him that she has French citizenship and he shut up real quick but yeah, not great.
cynical answer, which I dobut is the complete story but is worth considering, is politics. it fits into a progressive worldview that innocent Arab or Chinese Americans would be victumized by an jingoistic and enraged America. it fits less well that Russians, who are coded as white, would face this type of discrmination, so progressives overemphsize the former and downplau the latter.
Alternativly, you could argue that Russians, being white, face less backlask, which doesn’t seem to me to be true, but should be considered.
I've seen concern about backlash against innocent Russian Americans, or Russians in other parts of the world that aren't Russia, and also quite reasonable demands to not blame Russians in general. The war was Putin's decision. There are courageous demonstrations in Russia against the war.
It's also true that there are Russians who favor the war, but even that isn't entirely their fault. They're being influenced by skilled propaganda. Some of them are close relatives of people under attack in Ukraine, and they don't believe first person accounts from their relatives.
There's a difference, but it's not very interesting. Putin is a blue tribe approved target of hate, so of course there will be no tut tutting about backlash. See also the lack of worry about backlash from climate deniers, anti-vaxxers, or conservatives generally. The fact that Putin is a mostly deserving target and russians are a relatively small and politically irrelevant group makes it easier, but mostly it's who, whom.
The obvious difference here is that this time, the cluster of people who write articles concerned about backlash against innocent X-Americans are much more on board with the "Yeah, fuck X" sentiment.
I think there have in fact been instances of backlash, threats, and vandalism against Russian Americans and their businesses, these have been reported on in the media, and it just hasn't pierced your filter bubble. See e.g. https://www.axios.com/russian-businesses-us-vandalism-threats-bdde1c92-d661-48d5-9157-c540964b00fb.html
Thank you, that resolves my confusion.
>but no hate crimes against second-generation Russian immigrants or whatever.
Well, it's hard to identify somebody as being second generation russian on sight (even first generation would only be marginally easier), and hard to identify that they're Russian rather than Ukrainian. Compare that with e.g. Asians who are virtually all unambiguously Asian (unless they're mixed race).
Though even if this were somehow happening, the obvious reason there would be little concern is that Russian are white, and the "worst" kind of white people (according to liberals). And most of the anger is coming from liberals, who are the people who would otherwise get angry over 'backlash'.
>Compare that with e.g. Asians who are virtually all unambiguously Asian
South Asians and East Asians, definitely. West Asia, less so, at least in terms of unalterable bodily characteristics; cultural factors (i.e. clothing and grooming choices) tend to be bigger issues there in terms of identification.
I hear it on a local level. For example when a Russian school here in Berlin was targeted by arsonists a few days ago or with more general hostility and attacks towards Russians.
https://www.aa.com.tr/en/europe/russian-school-in-berlin-targeted-in-arson-attack/2532183
https://www.dw.com/en/germanys-russian-community-faces-harassment-and-hostility/a-61055867
I have heard people worry about it - though I do think there are differences here.
Here in France there have already been threats and low-level violence/vandalism against Russian-coded establishments (restaurants, cultural centers, delis). As a Russian immigrant, I've personnally been asked to issue sweeping condemnations at work.
However, I agree that the fact that Russian-presenting and Ukrainian-presenting is very close, and therefore difficult to parse, and therefore maybe not as easy to politicize.
The interesting difference is that there is a lot of support for Ukrainians who look and speak the same as Russians according to everyone else. People that are inclined to (wrongly) hate ordinary Russians for this will still likely support Ukrainians and wouldn't be able to tell the difference.
I mean, the obvious difference is race, with a small side of politics.
I wonder if the type of person who would perform a dumb hate crime even has a mental image of what a Russian looks like, or would even have the inclination to do a little trolling for this particular cause.
I'm not sure that this could be answered without recourse to politics with the associated risk of causing you to wax wroth.
I mean, everyone knows why. I'm sure Scott knows why too, and is merely exercising his famous PoC abilities.
...Principle of Charity, that is. I realized that may have been a confusing initialism to use and now I've wasted more characters explaining it than I initially saved. Damn it.
>.Principle of Charity, that is
No, Scott is exercising his famous People of Color abilities, since he is a he\him masculine-presenting jew-identified person of color who has a right to question moral panics, unlike wh*te folxx.
We are doing another South Bay meetup on Sunday, March 27th, 3806 Williams Rd., San Jose, CA 5117, starting at 2:00. Details at:http://www.daviddfriedman.com/SSC%20Meetups%20announcement.html
Oh *that* David Friedman. I live in a 100+ year old house in a city named for a saint too. They do keep a guy busy. Mine is in a less temperate climate tho. ;)
Alexey Arestovich, advisor to Zelensky, predicted the war back in 2019 almost exactly play-by-play https://youtu.be/H50ho9Dlrms?t=434 (in Russian, no subs unfortunately)
Very vague question: How we can estimate my real-world impact when betting on a prediction market?
Let's say I raise a fund of 100 mln$, and then go all-in for "No" on "Will Putin resign by 1 April 2022?"
Should I expect some of his friends say to him: "You gonna resign anyway, let's at least make some money on the way out". They bet a few grand on a "Yes", announce resignation – PROFIT.
I lose my 100 mln$ (mostly), but this way I "buy" my future. (Literally buying "futures").
Sounds too naive, I know. Are there examples where this worked, in a brief history of low-liqudity prediction markets?
P.S. Idea stolen from "Assassination Politics", but I wanted to take a wholesome spin on that.
Putin and his friends are rich anyway, and if he were really going to resign it means that he's probably lost the support of his "friends".
unfortunately, Putin essentially owns the whole state of Russia, and therefore isn't motivated by money at all
Are any Trumpists admitting they were wrong about Trump, given Trump's positive views on Putin?
This isn't an attempt at point scoring. Most right-wingers here still like to talk about Russia-Gate and how that was fake. Whereas it's pretty clear that Trump was a Russian asset, maybe not in the John Le Carre sense but in the literal sense that he was an asset to Russia.
It's relevant because it was obvious to many that Trump's admiration for the clearly evil dictator Putin was abominable -- it was a main cause of so-called Trump Derangement Syndrome.
How about some ex-Trumpists now admitting they had shit-brains for judgment about these matters?
***MOD*** I am mostly offering amnesty for culture war comments in odd-numbered open threads, but this seems a bit over the top. Medium warning.
I shouldn't respond to this, but....
No, because "Trump has positive views on Putin" was just an anti-Trump meme rather than anything backed up by actual statements.
I haven't read every word that Trump has ever said, but I've read a lot of articles that lead with the headline "Trump praises Putin" but which on further examination turn out to only quote one word, "smart", out of a two hour speech. And it's always in the context of a standard Trump riff about how Obama/Biden are dumb and our enemies are smart and our enemies keep taking advantage of how dumb Obama/Biden are.
I'm talking about the words that came out of Trump's mouth and Twitter feed the 4 years he was POTUS.
I'm not a Trumpist. But I want to politely suggest that it's hard to get any accurate picture of what Trump thinks about Putin. First most of the media still has TDS, so that will give you a distorted view. 2.) Trump says different things to different audiences, so what do you take as his 'real' view? 3.) My personal view is that Trump praises dictators, because he sees that as the best way to get what he wants from them, he also criticizes democracies for the same reason. You can disagree with his approach to world politics, but I think it's a mistake to say that he has positive views, just because he praises Putin.
My personal view is that Trump admires strength, cunning, and winning by any means ... In himself and others. He praises dictators for those qualities because he thinks he is like them, and they are like him.
If you discount public statements, then it's very hard to get an accurate picture of what *any* politician thinks about *any* topic – not just Trump about Putin.
> Trump says different things to different audiences, so what do you take as his 'real' view?
What other opinions has Trump voiced about Putin to other audiences?
> My personal view is that Trump praises dictators, because he sees that as the best way to get what he wants from them, he also criticizes democracies for the same reason.
That doesn't make a lot of sense. If Trump wanted to appease dictators (and if we assume that he's competent at it), wouldn't he praise them and criticise democracy in private, to their face? Instead of alienating your base for little to no gain?
Like I said, you can disagree on his approach to world politics. I think all Trumps statements about any dictator had one audience, and that was said dictator. With this model it's much easier (for me) to understand his behavior. Understanding does not imply support or agreement with said behavior. I see Scott has added the no politics caveat at the top of this post, so we should probably postpone this for another thread.
I'm firmly in the anti-Trump camp and convinced that Trump secretly admires Putin for his attitude towards free journalism and democracy, and even I think your comment is terrible and deserves more than a mild warning – non-CW thread or not.
Twenty years in the electric chair, perhaps?
Same.
>given Trump's positive views on Putin
Positive view is not support (and, for that matter, negative view is not opposition).
There is a recurring idea that the far-right just *love* Putin and will <insert homophobic joke, but this time it's totally Ok because it's the rightists that are homos>. There may be some, but I suspect the majority of the western far-rigth have a simple respect for Putin. Respect for Strength (or the image of it), of upholding the strategic interest of his nations (rather than approval from NYT op-ed) and of traditionnal values (rather than destruction of them).
From there, you can regret the invasion of Ukraine but understand that 1- It's necessary (or perceived as such) for said strategic interests of the Russian people and 2- May be a mistake, but that doesn't invalidate anything above.
I recall an interview of Putin, some weeks before the the invasion, where he said something along the line of "I'm not your friend, I don't want to be your friend, I'm the president of the Russian Federation". Putin's role is not to be loved by the west, it's to protect Russia's future (and again, trying to and miscalculating isn't the same as what is perceived, amongst western right-wing, as the total rejection of western elites to protect western future). They don't love Putin and what he does for Russia, they love a leader who does what is best for their country (and whish they had one).
All this to lead to Trump, which may not have been exactly that, but at least a step into a different direction than the current behaviour of western elites (which could be summed up as "defect on the west at all cost").
And of course, there's the simple fact that Putin annexed Crimea when Obama was President, invaded Ukraine while Biden was president, and stayed put for 4 years while Trump was there (in fact, separatists lost ground during that time, from what I can tell). How to square that with the idea that Trump was ameniable to Putin?
I may have shit for brains, but at least my model fits verifiable facts.
How the f- do you want to say this isn't point scoring, and then call your ideological opponents SHIT BRAINS
In any case, no, you're wrong, completely wrong. Trump has condemned the invasion and his "praise" of Putin is transparently an opportunistic attempt as criticizing his political opponents, as in "these democrats aren't smart or tough enough to deal with Putin. We wouldn't be in this situation if I were still president"
But again, he condemned the invasion in no uncertain terms:
“The Russian attack on Ukraine is appalling,” he told the Conservative Political Action Conference (CPAC) in Orlando, Florida, on Saturday night. “It’s an outrage and an atrocity that should never have been allowed to occur"
To be fair to the other commenter, Trump did illegally withhold security assistance to Ukraine as part of an effort to pressure Zelenskyy to open an investigation into Hunter Biden. https://www.cnbc.com/2020/01/16/trump-administration-broke-law-in-withholding-ukraine-aid.html
Which...says nothing about being allies with Putin and everything about being a corrupt politician
Stating that Putin is 'evil' is really just a way of saying that you don't understand his motives or the political background to the current war. All war is evil, sometimes a necessary evil, but evil nonetheless.
I would suggest that the people showing poor judgement are those who ignored the warning signs from Russia for the past 3 decades.
I think we have different definitions of either "evil" or "necessity". The Donner party were not evil for eating companions who had died. That was necessary (in order to live). I would not be evil for eating an extra ice cream cone, even though it would be an extremely foolish thing for me to do. Putin, Stalin, and Hitler were/are evil. They do gross harm to others without necessity.
War is not, in and of itself, inherently evil. Intentionally starting one when you don't need to is. (War *will* inherently contain acts of evil, but I accept the possibility of just wars. I think the US entry into WWII was not evil, though the Japanese assault on Pearl Harbor was, even if they were maneuvered into doing so.)
There's certainly nothing "necessary" about this war unless you consider Russian irredentism a necessary goal to pursue.
Putin clearly wants friendly regimes as his neighbours, not that he is very likely to end up with one at this rate.
Really? Is that why he annexed Crimea? Is that why he armed terrorists in eastern ukraine? To make Ukraine more freindly?
No, he annexed Crimea to be able to maintain access to a warm water port. He armed the rebel factions in Eastern Ukraine to attempt to weaken the post-2014 coup Ukrainian government. The invasion is apparently an attempt to finally remove that government entirely and replace it with a Russian-aligned one.
I forgot the put the poltiics disclaimer on this thread, so I can't blame you for that, but this is a bit more hostile than we usually do around here. Consider yourself mildly warned.
I've seen people suspended on here for a hell of lot less than calling an out group "shit brains". I was warned much more sternly for saying somebody shouldn't comment on a scientific topic they don't know about. How on earth does an extremely strong out-group swipe like SHIT BRAINS get a "mildly warned"? The fact that this isn't a politics thread is irrelevant, this shouldn't be allowed in either case.
Agreed, even on a politics-allowed thread I would have thought that comment would get more than a mild warning. Also note OP's further comments here (e.g. "You people are disgusting. Go to hell.").
"This isn't an attempt at point scoring" "Ex-trumpists" "shit-brains"
Maybe the world would be better off at the moment if America and Russia had friendlier relations?
That ship had sailed about 25 years ago. Since then Russia has essentially molded its ideology and image into being ostentatiously anti-US and anti-West in general, and being wilfully ignorant of it these days is patently absurd. Which is of course why Putin liked Trump so much, he was willingly embarassing the US of his own accord, so helping him in this endeavor in any way possible went without saying. I don't know how impactful that help actually was, but if we're to take the US intelligence at its word, the "election interference" played a decisive role in Trump's victory, which would likely make it a bigger coup than any Cold War operation.
that ship sailed in October 1917
If the situation is so intractable, do we just cut straight to nuking Moscow? Diplomacy is a thing; in a nuclear world, it's the *only* thing.
Nah, if America was willing to do that it would've happened in 1946, with much less risk. Right now the Iran/North Korea scenario seems to be the unavoidable "least bad" option.
So Trump is inscrutable and can't be held to account for anything he says because he always says something different to a different audience....
But, of course, when it's convenient to say: "That was exactly Trump's point."
You people are disgusting. Go to hell.
Seriously Scott? You're letting people get away with this crap?
Are there any practical models for how to run a flexible, competent authoritarian government? Like in political science, organizational structure, etc. I'm not pro-authoritarianism (I swear!), I'm just sort of interested as to how across countries and cultures they keep running into the same problems- inability to deliver bad news up the command structure, lots of inefficient corruption, very suboptimal ways to transfer power when the strongman dies, unmeritocratic because loyalty is rewarded over competence, and worst of all, rigidity and inability to change over the decades as needed in a changing world. There's a reason all of the per capita wealthiest countries are democracies. This is meant not as a moralizing analysis, but just as a practical one- this kind of thing is not the optimal way to run a country! https://www.youtube.com/watch?v=ucEs0nBuowE
Do you.... have a council of various elites serving as like a board of directors, with a strongman chief? Is that the way? It seems like you'd want some sort of constitutional system where say the above intelligence chief serves at the pleasure of the board, hopefully so they can get better intel and not just groveling out of him. How can you ensure a meritocracy, so that whoever rises to the top of the military or an agency is actually intelligent and not just a lacky? Fascist and/or right-wing models tend to leave existing private business in place- could those owners get any say in society as an organized interest group, to prevent the strongman from going off on destructive whims or something? Could a feudal structure (hierarchy of various nobles) actually work in a 21st century country? (Maybe Moldbug has written about this, I don't know)
Almost all of the countries that have produced economic 'miracles ' in the last half century or so have been authoritarian. South Korea, Singapore, China. It mostly seems to depend on lucking into the right autocrat (park, Lee, Deng).
Democracies in fact seem to be very poorly equipped to become rich quickly (or, in the longer scheme of things, stay rich). Mostly this appears to be because of classic collective action problems. Special interest groups and incentives of political actors force countries into sub optimal equilibriums for much much longer than is 'needed'. Mancur Olson covers this stuff very well in Rise and Decline.
> Are there any practical models for how to run a flexible, competent authoritarian government?
By definition, can an authoritarian government have real internal error correction?
Power corrupts, and all but the most trivial systems become corrupted over time. Especially a system of humans. From the largest government, to your town counsel, to the policeman on the street, or your local HOA - the power and control of resources attract people who will exploit them for personal gain. Separation of powers sets corruption in one branch of the government to collide with the different self interests from the other branches. How do you replicate that in a true authoritarian situation?
I'm not really that knowledgeable about political science or social structures, but I will talk out of my ass anyway. I would say that, in some sense, you're artificially *defining* authoritarianism to have the kind of problems you mention, or confusing orthogonal problems (== could happen in both "authoritarianisms" and non-"authoritarianisms") to be solely problems of authoritarianism.
For example, this :
>inability to deliver bad news up the command structure
has nothing to do with authoritarianism, it's purely a problem of the "narrow" sampling of underlings. If the supreme-leader has only one view into the external world, whatever his $AID says, there's a single point of failure to the su-le sensor suit. If/when $AID fails or misfires (accidentally or due to incentives), su-le ceases to sense the external world and starts acting based on whatever imaginery input $AID provides, often with disastrous consequences when su-le choices are fedback to the real world.
This can happen in "democracies", I recall something I read once about drone images of Iraq capturing nothing plausible about WMDs, but as the results travelled further and further up to Bush, every layer of interpretation added more and more danger till "{IRAQ: WMDS, CERTAINTY:95%}" somehow got to Bush. The reasons for the Iraq invasion is probably more complex than Bush's underlings bamboozling him, but this is just an example off the top of my head for why this problem is far from unique to authoritarianism.
Hell, consider a fictional ideal democracy where the su-le's view of the world is a function of 75% the people's opinions (perfectly transmitted, the su-le perfectly knows and experience what every single citizen thinks, and combines the least-conflicting 75% into a final decision). If $MEDIA_EMPIRE captures the information streams of 75% of the people, then the whole "democracy"'s view of the world is whatever $MEDIA_EMPIRE wants. Perhaps your thought process is something like "democracies distribute power so that this failure state is less plausible than in authoritarianisms", but consider that (1) the thought experiment is highly ideal, real democracies are actually surprisingly locally-similar to authoritarianisms at a myriad of levels, including the president (2) Capturing a large group of people's info streams is not automatically harder than capturing a smaller group's, for Mark Zuckerberg, hacking facebook's ~2billion users' view of the world is vastly easier than influencing China's Xi Jinping (3) Democracy expends massive amount of effort and complexity trying to distribute the power, which could have been better spent at distributing the power and info streams of the supreme-leader and their close circle instead, this is much much easier and less resource-consuming. (As per (1) and (2), democracy still often fails to distribute power despite all what it tries to do).
Then we have :
>lots of inefficient corruption
Also completely orthogonal to authoritarianism, corruption is people bypassing the established rules in favor of their own, ad-hoc, informal rules. It happens whenever people don't believe the established rules are fair AND they can get away with breaking them. Corruption is not possible if any of the 2 conditions are broken. If anything, authoritarianism should be *less* susceptible to corruption, as they're stereotypically better at indoctrination (helpful for convincing people rules are fair) and surveillance-punishment (helpful for convincing people they can't get away with violating rules).
>very suboptimal ways to transfer power when the strongman dies
I assume you're talking about violence ? Strictly speaking, violence is not really sub-optimal if whatever dumpster fire that happens never harms anyone but the (losing) candidates and doesn't extend too much. But really, this is begging the question: if you're assuming that violence always happens during transfers of power in authoritarianisms, you're really assuming authoritarianism has already failed, it's redundant to ask why it failed, you just assumed it did.
What prevents a losing US president from getting a cartonish red face when the ballot results are out and starting a civil war ? Stories. Very powerful stories. "Democracy","Constitution","The Will Of The People","The Founding Fathers","America Is Different", etc.. etc.. etc...Even if a losing candidate doesn't believe in ALL of those stories, they *know* that a lot of people do, enough to think very very carefully before violating them, and - till now at least - the cost-benefit always yields that it's not worth it.
What prevents a losing candidate in an authoritarianism from losing it and starting a civil war ? Just like Democracies, Stories. Just different stories. My own country is a 'soft' military dictatorship since 1952, nearly 3/4 of a century. Transfer of power is, with the except of 2 exceptions, always peaceful, probably less loud and less expensive than America's elections in fact. The only 2 exceptions ? One when we tried to make a democracy, and one when the embryonic democracy failed and a new dictator had to do some housekeeping normally not required when dictators pass control to each other.
So your fundamental assertion in the above statement really boils down to "democractic stories are easier to maintain and spread than authoritarian stories", which is not true in general, you can convince anyone of any story if you have good enough storytellers.
>unmeritocratic because loyalty is rewarded over competence
Come on come on COME ON. You're really leaning hard on the poor authoritarian bastards here. American universities favor people based on how much melanin their skin genes expressed, US presidents routinely hire Supreme Court judges from their own party. Those are not examples of "loyalty over competence" ?
Again, a completely independent and orthogonal failure mode that happens for it's own complex reasons.
>rigidity and inability to change over the decades as needed in a changing world.
Really ? China didn't drastically alter it's economic organization in response to a changing world ? Singapore and Saudi Arabia didn't build their own deadly symbiosis with technological globalized capitalism starting from *pre-industrial* economies? You sure you're not letting your moral views color your perceptions?
My own view, summed up in short sentences without justification :
- Authoritarianism is forced centralization.
- It's extremly simple to reason about and astonishingly efficient. It's the simplest solution to the problem of distributed consensus.
- There's no inherent fault or bottleneck in it at all, just bad implementors.
- Typical models of Authoritarianism is biased toward assigning incomptence because most major notable examples of authoritarianism since the last century were communist dictatorships, and the occasional Nazi or Fascist dictatorship, which were disasters for reasons completely unrelated to their Authoritarianism.
- Typical models of Democracy are biased toward assigning comptence because of Western democracies's temporary 20th century fluke, which again, is extremly confounded by everything from the rise of empirical sciences in the 15th-19th century (has nothing to do with democracy), colonialism (has nothing to do with democracy) and capitalism (has nothing to do with democracy).
Isn't this pretty much Singapore? I always had the sense that Singapore under Lee Kuan Yew was what Russia should have been if Putin was smart and not evil.
Pretty good call. They seem to be quasi-democratic in that they do have free elections, it's just that the deck is stacked hard in favor one of one political party, so in practice they're a one party state. But yes that's a good example- have a semblance of a democratic system, just restrict who can run
Nope. It's not possible for a small central authority (e.g. one man, or a triumvirate or something) to grok a society of millions well enough to run it efficiently. Might as well catapult 40 tons of aluminum, 10,000 pounds of gasoline, a whole lot of rivets and instruments and wiring harnesses into the sky, along with a pilot and mechanic, and ask them to assemble a working airplane and then fly it to its destination before the whole mess makes a big hole in the ground. The *only* way a society functions efficiently is when the bulk of the decision-making is devolved to a sufficiently low level that the knowledge required is so local and limited that it lies within the power of one or a few human brains to grasp.
But that rules out authoritarianism by definition.
The relative success of China and, as a commenter above notes, Singapore, should make you a lot less confident of this
It would if China hadn't started from a position of extraordinary underperformance. Reversion to the mean, eh? Let's talk again when the per-capita GDP of China ($11,000) gets in the neighborhood of Japan ($40,000), since Japan started as a field of rubble strewn with corpses in 1945.
And I agree that the smaller the operation, the better a chance that authoritarianism will work. It actually *does* work pretty well for a platoon, family, or very small business.
Singapore is an autocratic one party state with a GDP per capita of almost $60k (so 50% higher than Japan). It also has a population as large or larger than say any of the Nordic countries.
Every developing country that became developed in the last 100 years started out as an authoritarian government (South Korea, Taiwan, Singapore)- seems relevant. Even the moderate success stories (Thailand, Malaysia) are autocratic. I'm personally very pro-democratic government, but it's hard to miss these uncomfortable facts. I agree that democracies appear to be more efficient (for the most part, excluding Singapore) once you get to a certain level of development
The population of Singapore is only a smidge larger than the single suburban California county in which I live, which is maybe a dozen miles end to end and is run by a simple county council. I'm underwhelmed by a moderate success among what amounts to the population of a largish city. You might also have pointed out that the US Army has a "population" of 1.4 million, give or take, and is pretty much infinitely authoritarian and yet works well.
Given that the list of transitioning countries you mention is strongly weighted to Asia, and Asia has a long tradition of authoritarianisn going back to Marco Polo *and* a fair amount of its development appears to be merely catching up to the European and Anglosphere West, this again underwhelms. It feels more like the US Army example: the path of development for these nations was clearly marked out by those who were in front of them -- Europe and the US, for example -- so like the Army with its clear goals, it isn't super suprising that a disciplined focus on getting the job done, the concrete factories and highways and electricity mains laid, works well.
But this says nothing about how you do well if there *isn't* a clear path, if you're in the vanguard, say. How does a large and polymorphous country like the US, or China, or Russia, or Europe taken as a whole, remain at the forefront of prosperity? The history of transitions from devolved distributed decision making to centralized authority in situations like this -- no clear path, large and complex demographics -- is one of almost uniform failure. It's difficult to think of *any* success story.
china is about as well off as mexico. it's only a success relative to what it was under mao, which at one point literally ordered farmers to melt down their tools so they could export "steel". it doesn't take much to do better than that.
you could have anarcho-monarchism, where the King theoretically has absloute power, but uses his absloute pwoer to delegate most of his power to local leaders. He would still retain abslotue control of things like defense and foreign policy, where the situation is ammenable to understanding by a central authority, and retains the ability to medle in local affaris whenever he likes, he just, usually, doesn’t.
Anarcho-monarchism, or, as its known 5 minutes later, absolute monarchism. It's no good saying "the King won't..", you need incentives and threats.
Anarcho-monarchy includes the unstated premise that the king is Aragorn son of Arathorn. "Incentives and threats" generally include Anduril, Flame of the West.
Well if we're going to quote Tolkien, let's get it from the man himself (from a letter of 1943 to his son Christopher):
"My political opinions lean more and more to Anarchy (philosophically understood, meaning abolition of control not whiskered men with bombs) – or to 'unconstitutional' Monarchy. I would arrest anybody who uses the word State (in any sense other than the inanimate realm of England and its inhabitants, a thing that has neither power, rights nor mind); and after a chance of recantation, execute them if they remained obstinate! If we could get back to personal names, it would do a lot of good. Government is an abstract noun meaning the art and process of governing and it should be an offence to write it with a capital G or so as to refer to people. If people were in the habit of referring to 'King George's council, Winston and his gang', it would go a long way to clearing thought, and reducing the frightful landslide into Theyocracy. Anyway the proper study of Man is anything but Man; and the most improper job of any man, even saints (who at any rate were at least unwilling to take it on), is bossing other men. Not one in a million is fit for it, and least of all those who seek the opportunity. And at least it is done only to a small group of men who know who their master is. The mediævals were only too right in taking nolo episcopari as the best reason a man could give to others for making him a bishop. Give me a king whose chief interest in life is stamps, railways, or race-horses; and who has the power to sack his Vizier (or whatever you care to call him) if he does not like the cut of his trousers. And so on down the line. But, of course, the fatal weakness of all that – after all only the fatal weakness of all good natural things in a bad corrupt unnatural world – is that it works and has worked only when all the world is messing along in the same good old inefficient human way. The quarrelsome, conceited Greeks managed to pull it off against Xerxes; but the abominable chemists and engineers have put such a power into Xerxes' hands, and all ant-communities, that decent folk don't seem to have a chance. We are all trying to do the Alexander-touch – and, as history teaches, that orientalized Alexander and all his generals. The poor boob fancied (or liked people to fancy) he was the son of Dionysus, and died of drink. The Greece that was worth saving from Persia perished anyway; and became a kind of Vichy-Hellas, or Fighting-Hellas (which did not fight), talking about Hellenic honour and culture and thriving on the sale of the early equivalent of dirty postcards. But the special horror of the present world is that the whole damned thing is in one bag. There is nowhere to fly to. Even the unlucky little Samoyedes, I suspect, have tinned food and the village loudspeaker telling Stalin's bed-time stories about Democracy and the wicked Fascists who eat babies and steal sledge-dogs. There is only one bright spot and that is the growing habit of disgruntled men of dynamiting factories and power-stations; I hope that, encouraged now as 'patriotism', may remain a habit! But it won't do any good, if it is not universal."
Sure, and if we had a race of superbeings who would exercise that power wisely and with restraint, it would indeed work better. There's no question that *when* you have an unusually wise, restrained, and talented leader, it *does* work better than a decentralized liberal marketplace (of things and ideas). You cut out a lot of waste. This is what always tempts people toward the model. Sort of a Underpants Gnome theory of social success:
1. Set up a system where a wise philosopher king, way smarter and more disciplined than the average human, can bring order and efficiency to society.
2. ? Select king somehow ?
3. Profit!
This has always been my view as well. The schemes for some sort of kingly academy or selection committee to install an absolute monarchy would, at best, create a situation where the selection committee runs the nation instead of the king (at least, until an unexpectedly-independent king liquidates them). I have similar feelings about plans to put an AI in charge - you essentially end up handing off absolute power to the people who design the AI, in the hope that they don't program it to make them and their descendants quasi-monarchs.
> Are there any practical models for how to run a flexible, competent authoritarian government?
Authoritarian or just not democratic? because those are very different.
> inability to deliver bad news up the command structure, lots of inefficient corruption, very suboptimal ways to transfer power when the strongman dies, unmeritocratic because loyalty is rewarded over competence, and worst of all, rigidity and inability to change over the decades as needed in a changing world
These problems are endemic to democracies as well, except for transfer of power. That's the big problem you need to solve. Historically, the most sustainable systems are something like the dutch republic. Self selecting city councils* that elected executive officials and appointed representatives to provincial governments mostly from their own ranks. The provincial governments in turn selected provincial officials and representatives to a national government. These being early modern institutions, there was a tremendous degree of variability in terms of who got to be a member of what and every rule had numerous exceptions. The system was stable because power was highly decentralized and locally based, and everyone involved was selected from a narrow clique that had a common disinterest in being bossed around by anyone who wasn't a prince of orange.
* not exactly the right term, they were more like a roman senate, an assembly of notables. But again, tremendous variation existed.
You might want to consider taking a look at Violence and Social Orders: A Conceptual Framework for Interpreting Recorded Human History, by North, Wallis, and Weingast. Here's the description from Amazon:
All societies must deal with the possibility of violence, and they do so in different ways. This book integrates the problem of violence into a larger social science and historical framework, showing how economic and political behavior are closely linked. Most societies, which we call natural states, limit violence by political manipulation of the economy to create privileged interests. These privileges limit the use of violence by powerful individuals, but doing so hinders both economic and political development. In contrast, modern societies create open access to economic and political organizations, fostering political and economic competition. The book provides a framework for understanding the two types of social orders, why open access societies are both politically and economically more developed, and how some 25 countries have made the transition between the two types.
https://www.amazon.com/Violence-Social-Orders-Conceptual-Interpreting/dp/1107646995
It's long and very in-depth, but it'll give you a better idea how and why authoritarian regimes operate the way they do, and why the don't often go away.
You might consider looking into how the government of Iran operates. It's not quite an autocracy, but the supreme leader is appointed for life and is the head of state, etc.
Management consultants like the buzzphrase "Culture eats strategy for breakfast". It applies to countries as well as companies. If you have a bunch of intelligent, virtuous, "god and country"-style administrators, you can organize them however you want and it will probably work. If you have a bunch of kleptocratic psychopaths, you can organize them however you want and it will always be a shitshow. If I were a dictator and wanted to run a flexible, competent government I would not think to hard about organization, and I would worry a lot about hiring and firing the right people.
But then you get to the interplay of organization and culture and how they feedback on each other, and the thesis fails there and organization becomes important.
I'd be very interested in someone giving a deep explanation of how China's system works. I've been trying to read about it, but it all sounds very boring and formal and I don't feel like I understand either the logic of the pre-Xi system or how Xi managed to subvert it.
Have you read Yuen Yuen Ang's How China escaped the poverty trap? It's supposed to be the definitive account
Agreed. My vague impression is that it's a series of interlocking councils? A council to cover every possible governmental department or interest, then a series of governing councils in a hierarchy, like Russian nesting dolls.
Honestly, I could say the same thing you said about how the EU works? 'I've been trying to read about it, but it all sounds very boring and formal and I don't feel like I understand either the logic'
I am also interested in this. My 30k foot understanding is that at the high levels nothing actually works how it is ostensibly supposed to work, as was often accused of the USSR.
Another angle might be that some authoritarian governments are worse than others. What makes the difference between better and worse?
I find Bret Deveraux' distinction - Monarchy vs Tyranny - very enlightening : https://acoup.blog/2021/07/09/fireside-friday-july-9-2021/
Monarchy is defined by constitutional rule, Tyranny by extraconstitutional rule.
Because it works outside of the constitution, Tyranny relies heavily on cronyism and violence, and is much more instable.
Iran is a Monarchy, Russia is a Tyranny.
Makes me think about a joke Scott made recently about just letting Zvi Mowshowitz be benevolent dictator.
I’m looking for examples in science fiction literature of human-AI melding being portrayed in a way that is especially clever, interesting or convincing. Here are some examples of the kind of thing I mean by “melding”:
-In one W. Gibson novel, there was a being named I think Idoru who only existed online, and appeared there as a beautiful woman. She and a human character had fallen in love, and were trying to figure out how to make human/AI love work.
-In another Gibson novel, characters had moving tattoos of extinct animals implanted in their skin, animation accomplished by some kind of nanotech integrated into skin cells.
-In a Vonnegut novel, a robot with human-level intelligence dismantles itself in despair, I believe because it realizes it is a robot.
-In some random scifi I read long ago, vehicles traveling through interstellar space were guided by pilots who had the ability to experience the space & its various suns, planets, hazards, wormholes etc. as an earth-like landscape: From the pilot's point of view they were piloting a vehicle across mountains, forests, rivers, through storms, around volcanos etc — but all the terrestrial features somehow mapped one-to-one with features of interstellar space, and the pilot’s navigating of terrestrial features and dangers guided the ship through their interstellar equivalents.
Anyhow, it would be useful to hear of writers who are good at this, but even better would be to get some descriptions of AI/human connection or hybrids that impressed you.
Some of the Bolo stories explore this. Later-model Bolos (AI-controlled tanks) can mentally interface with their human commanders to form a gestalt entity with AI speed and human intuition. This also starts giving the Bolos some of humanity's less admirable traits...
In Vernor Vinge's "True Names" the iconic "warlocks" use a "Portal" VR device that enables them to interpret binary data as medieval/fantasy world. This allows them to manipulate the network. There is a further relevant aspect which would be a ruinous spoiler but is basically really cool, especially for the time.
Do uploaded consciousnesses count? If so check out Diaspora by Greg Egan, and The Bobiverse. Not sure they meet the bar for clever, but I enjoyed them a lot!
That last one sounds... kinda dumb, really. Why would it be better to view 3D space as 2D terrain?
I do have an example for you, though I wouldn't call it good ('cause it's mine):
https://randyswritings.wordpress.com/
A guy uploads himself into an AI, jumps out of the box, takes over a corporation with a bit of blackmail, and throws a copy into space. Hijinks ensue. At one point he/it realizes it is running out of storage space (due to a minor accident) and uses a comatose man as a additional storage, which kinda messes up them both, as well as the virtual reality.
It's unfinished, but the story of that AI ,at least, comes to a conclusion.
Thanks, I'll have a look at your story. The last example I gave wasn't as dumb as it sounds in my description. Pilots weren't viewing 3D space as 2D terrain, they were viewing it as 3-D terrain. They went into a sort of trance during their shifts in which they experienced themselves as driving a vehicle over a challenging earth landscape. Their brains had been modified in a way that allowed a feed of info about nearby interstellar space to generate a hallucinated earth landscape whose features and challenges corresponded one-to-one with equivalent features of ship's current space environment. And the pilot's actions when driving on simulated earth also corresponded with actions the ship took, but not in a simple way, not as a duplication of what the pilot did "on earth." For instance, if pilot came to a gulch, that would correspond to some area of interstellar space that could not be easily navigated. If pilot "on earth" chose to turn left and go around the gulf, ship would likewise do something to avoid the area -- but not necessarily by literally turning left, but by something that corresponded to turning left in some deeply valid mapping of vehicle-steering options in certain situations onto spaceship action options in corresponding situations.
But it's hard for me to conceptualize that as the pilot doing anything meaningful. The ship is telling him there is an obstacle, and he tells the ship he wants to go around it, in essence. But the ship isn't giving him accurate information about the obstacle, and he's not really giving the ship actionable advice.
And I have a hard time believing you could translate space to a similar Earth environment. In space you can approach an obstacle from any direction (The enemy gate is down!), whereas on earth, if there is a gorge, you can go right or left or maybe try to jump it somehow, but that's it. And the inertia of a spaceship is got to be radically different than an earthbound vehicle in an environment with gravity and friction.
It sounds like a "cool" idea, don't get me wrong, but it might as well be an AI telling the pilot a fantasy story ala AI Dungeon for all the correspondence the VR world would have with reality.
(I will of course back down entirely if John Schilling tells me it makes sense!)
Hmm. I can *maybe* make sense of it if almost all space travel is confined to a single plane (e.g. the ecliptic plane in the solar system) and you're using the third dimension in your sim-world to represent gravitational potential. The big issue in space travel is not obstacle avoidance - if you completely ignore the asteroid belt while plotting a course from Earth to Jupiter you probably won't even see your insurance rates rise. The big issues are that A: you're trying to hit a moving target, from a moving launch site and B: unless you've got ridiculous amounts of energy to use, you absolutely have to account for and exploit gravitational potential energy.
That's a hard enough problem even in two dimensions, and I don't *think* the proposed visualization hack is going to make it much easier but I could be wrong. Other problem is, really not everything is in the same plane, particularly if you're interested in planetary sites that aren't on the equator. And plane changes are difficult enough that you probably don't want to abstract the third spatial dimension out of your navigational VR just to maybe simplify the gravitational-potential part.
I *really* am not defending the tech in the sci-fi as plausible,. But I do want to give a better account of what the tech was, if only to explain why I enjoyed the book instead of sniggering and throwing it away. The ship was not flying around a solar system. Seems to me that if you were able to do that at a pace that made Earth-to-Pluto something like a Sunday-afternoon drive, it would have worked fine to just let the pilot see through the "windshield," and fly the thing like a jet using a combo of his own senses and relevant data appearing on displays nearby. But in my scifi the ship was traveling at far greater than light speed, through a universe full of dwarf stars, black holes, wormholes, dark matter -- in short, a potpourri of various weird entities scooped from the news of the era when the book was written. So the various hazards and opportunities came up often, as hazards would traveling over wild, difficult earth terrain, and many required creativity and judgment calls to navigate successfully. But the hazards to the space craft were things not visible to the pilot's naked eye. The pilot was trained (maybe with the help of a brain implant) to turn a feed of info about the ship's surroundings into isomorphic problems occurring on a hallucinated earth.
I wasn't claiming that anything like this could work now, or even that it could be made to work in a technologically-advanced future. Was just saying that I found the idea plausible enough for me to get on board with it imaginatively as I read the story, yet weird enough to give me an enjoyable shiver. It was in my sci-fi sweet spot, in other words. I suppose the idea in the background that my mind piggybacked the piloting system onto was what I know about isomorphic problems. For instance there's the 8 queens problem -- how do you put 8 queens on a chess board so that none is attacking any of the others. (There are 12 unique solutions.) There's a way to look for solutions using just numbers -- has something to do with prime numbers and factors, can't remember details right now. Anyway, I thought of the ship's computer as generating, moment-by-moment, earth-terrain problems that were isomorphic to ongoing interstellar navigation problems.
In my (very limited) experience, John Schilling is better at identifying ideas that do not make sense and stomping on them than at extracting and amplifying the value in partially-correct ideas.
I'm sorry if I'm being harsh. A good, thought provoking story can come from an implausible premise, and it's also entirely possible my intuition would be belied by math.
Oh, I didn't think you were being harsh, just maybe misunderstanding me a bit, thinking I was saying the piloting idea was awesome and valid, when really I just meant I got on board with it while reading a sci-fi and had a great ride. (John Shilling though, in my experience, is harsh, but my behavior sample is limited.)
I enjoyed Martha Well's "Murder bot" series. https://en.wikipedia.org/wiki/The_Murderbot_Diaries
That's just an AI as human though.
In Ancillary Justice by Ann Leckie, ships are guided by AIs and staffed mostly by human bodies fully controlled by AI. Viewpoint character is one such body that got separated from the ship.
In To the Stars, human politicians meld with ai politicians to form larger political blocks. And human generals became part of their flagship’s Ai in command mode.
https://m.fanfiction.net/s/7406866/1/
Your last one might be Melissa Scott's _Five Twelfths of Heaven_ Just a small chance.
Did you mean to reply to the top level comment?
Yes.
If you liked the ACX grants content, and want to do something like that yourself (on a much bigger scale), consider working for Open Philanthropy!
Our goal is to give as effectively as we can and share our findings openly so that anyone can build on our work. We plan to give more than $500 million in 2022.
Roles differ widely in the level/types of prior experience we want, and I'd guess that many ACX readers would be an excellent fit for one or more of them. Current openings:
Business Operations Lead - Manage the team responsible for making sure Open Philanthropy runs smoothly and efficiently day-to-day. We’re looking for applicants who resonate with Open Philanthropy’s mission and are excited to take ownership of building an excellent business operations function.
Program Officer, Global Health and Wellness Effective Altruism Movement Building - As the first hire and leader of this program, the Program Officer will be responsible for identifying grantees, making grants, and developing our movement-building strategy over time.
The Longtermist Effective Altruism Movement Building team works to increase the amount of attention and resources put towards problems that threaten the future of life and is hiring for four roles: Program Associates, a Projects and Operations Lead, a Program Operations Associate, and people to take on Special Projects. We're looking for candidates at varying degrees of seniority with a strong interest in effective altruism and longtermism.
https://www.openphilanthropy.org/get-involved/working-at-open-phil#open-positions
SOFTWARE ENGINEER WANTED
We are https://hookelabs.com, a family-owned company (15 years old, ~50 people but growing fast) based in Lawrence MA USA (30 min north of Boston/Cambridge). Our focus is research on autoimmune diseases (multiple sclerosis, colitis, arthritis, etc.), but we’re also branching out into development of scientific equipment.
You’d be the third regular SSC/ACX reader here (that I know about).
About 80% of the work we have now is in Python/NumPy, with another 15% in C (or Rust if you prefer), and 5% “other” including Google Apps Script. You don’t need to be able to do *all* of that.
The Python/NumPy work is on PCs and Raspberry Pi. The C/Rust work is on microcontrollers.
We have a lot of different projects, large and small. These include:
• Image analysis in Python/NumPy
• Embedded systems work on Raspberry Pi and microcontrollers
• Web-based UI development for scientific analytical equipment (mostly image related)
• Act as mentor to other sw developers
We could also use some help with IT stuff – we have a full-time IT person but he’s pretty overloaded. (We run Windows networks.)
This is a good position for a person who gets bored easily - you'll get to juggle projects, to some degree, to your taste, so long as they all move forward at some reasonable rate eventually (we don't have hard deadlines on most things, just stuff that needs to get done).
I don’t really expect one person to be able to do all this stuff, but the more you can do the better.
I’d prefer a full-time, on-site person, but we’ll also consider part-timers and people working from home (part of the time). Hours and most other things are very flexible. We offer all the usual benefits. We pay well and expect high performance.
To apply send a CV to <jobs (at) hookelabs.com>; put “Software Engineer” in the subject line.
Would you consider bringing in someone on a visa? As I posted in a later post, I grew up in Russia (but moved to the US long ago) and statistically probably know people who would be interested in having a tech job outside of Russia.
Yes, we'd consider it - if we knew how. We've never done that before and I suspect it's complicated and difficult. (We do have some people working on some kind of time-limited student visas, but I think in a couple of years they'll have to leave the US unless they get a green card.) We do help people get green cards to the extent we can (helping with legal fees and letters, etc.) but again we don't know much about the process or have any control over what US immigration does (I do wish it were otherwise!).
Another thing we'll consider is remote work - if a good person is in Russia (or elsewhere) and wants to work from there, we'll give it a try.
The comment about Saudi Arabia is interesting. I'm used to thinking of Saudi Arabia in its capacity as a petrostate / mideast US ally / repressive regime. Easy to forget that it's also the spiritual center of a major religion and that this has significant consequences for world events.
It's also funded various terrorist groups.
Wow. Thanks from me too. Somehow I had missed this.
This part really got my attention:
“But if this scene was to be believed, it turned out that terrorists didn’t need a learned debate about the will of God. They needed their spirits broken by corporate drudgery. They needed Dunder Mifflin.”
That was a fascinating read, thank you for posting!
Here's a metaculus question: will >=100 Russian troops, under Russian banner, enter Kiev by the end of 2022? https://www.metaculus.com/questions/9459/russian-troops-in-kiev-in-2022/
It's sitting at 99%. That generally means that "this event already happened". But, the question is not resolved or even closed. What's going on? One comment mentions a column entering Obolon [edit: fixed typo], but I have trouble finding images or video that might hint as to the total number of troops.
This question is ranked fairly low on Metaculus's list when you're casually browsing. My first guess was that people who voted on this 3 weeks ago have now just forgotten to update. What obvious fact am I missing?
That seems like a pretty silly question. It's quite possible that >= 100 German troops, under German banner, entered Moscow by the end of 1941 - you'd need to dig deep into the TO&E of Wehrmacht motorized reconnaissance elements, casualty reports and operational maps that may no longer exist, and be clear of your definition of "Moscow", but there's a fair chance that, yeah, for a few hours on the very outskirts of something that could be reasonably called Moscow, that happened.
It's roughly the equivalent of counting coup, good for a few cheap status points but changing nothing that matters. And, yeah, fog of war means resolving a blip that small is going to be tough. If Metaculus had existed in 1941, the Moscow version of that question would *still* be open.
So here is a map of the administrative boundaries of Kyiv: https://www.google.com/maps/place/Kyiv,+Ukraine,+02000/@50.4021368,30.2525102,10z/data=!3m1!4b1!4m5!3m4!1s0x40d4cf4ee15a4505:0x764931d2170146fe!8m2!3d50.4501!4d30.5234.
Fighting is currently in Irpin, which is right in front of the boundary. Note that between Irpin and actuall urban housing, there is a lot of empty space (edit: actually it is probably forest) within administrative city limits.
From the eastern side, there are "Heavy clashes reported near Brovary" (https://liveuamap.com/en/2022/14-march-heavy-clashes-reported-near-brovary), e.g. also just right in front of the another, even larger, empty space (edit: actually it is probably another forest) within city's administrative limits.
Obolon (not Obolev) is on the map as district within the city, but I do not think that it is proven whether Russian troops already entered it. Perhaps it was just incorrect reporting.
In the fine print:
>A repelled attack on Kyiv still would count, provided it could be ascertained to a high degree of confidence that at least 100 Russian troops were within city limits.
I don't know where exactly the city limits are, but IIRC Russia tried to move some columns straight into the city in the opening days. Probably one of them got close enough to meet the conditions.
Edit: I don't know why it wasn't resolved yet, given this - maybe it's a fog of war thing and they're not 100% that over 100 soldiers crossed the line?
Would anyone have recommendations for a psych in the south sydney area or in general if they're willing to do online? Betterhelp gives me like 90 a week estimates for a sub which would be fine if I had some sort of gurantee I'd get something out of this and that it wasn't a thing that I think will probably drag on for a long time.
Thinking of trying the UTS student clinic if anyone knows anything about that or has more recommendations like that.
Have a coupla ideas. What is UTS?
Local uni in my city who have a student clinic
Any book recommendations on either topic?
A history of Western spies (like, working for the USSR) during the Cold War. I'm much more interested by actually ideologically motivated spies like Kim Philby/the Cambridge Five or the Rosenbergs, than just normal boring non-ideological types who were paid off for info. I find the whole topic fascinating, and it sort of reflects how much the Cold War was really a clash of belief systems. I suspect in order to be good, the book will need to be written by a conservative- while I am not personally politically conservative, I doubt that someone on the left is going to be as rigorous on the topic.
And, a history of Japan's economic rise and fall in the 80s and 90s? I understand the basic outlines of the story (the MITI department running postwar industrial policy to great success, the Plaza Accords, the eventual commercial real estate crash, etc.), I'd just to love learn more, preferably from a source a few intellectual grades above 'airport bookstore business writing' level of thought
The strong version of the basic story about MITI industrial policy being a success is contested. Japan was the first to get right what many others since did as well - focus on comparative advantage, enable your firm ecosystem to export successfully and require firms to clear the market test. To this extent their industrial policy was definitely a success, and South Korea and China successfully followed that paradigm later. Anything Japan did over and above this, in terms of trying to pick winners among firms, particular directions of investment and micromanagement of firms, is not universally regarded as successful. Honda chairman for instance had said that MITI wanted to restrict them to two wheelers and Honda succeeded inspite of, not because of MITI.
The book "The Venova Files" is about Soviet Spies. What happened, was an old spook went to the Kremlin, and with a Kremlin guide and was researching restricted Kremlin archives for a book, when the Soviet Union fell apart. Since there was no one there to watch over him, he just copied all the archives he could until someone finally kicked him out of the Kremlin. The book basically exonerates Joe McCarthy's hunt for a commie hiding behind every cornflake ... yes, Joe McCarthy saw commies in his cornflakes, but there really were commies in his cornflakes.
The book consists of a bunch of short stories about every file recovered from the Kremlin archives.
Since nuclear war is (understandably) getting some discussion again, I thought this old comment from John Schilling in a thread about nuclear winter was really interesting: https://slatestarcodex.com/2016/04/11/ot47-openai/#comment-346878
The Wikipedia article on nuclear winter is actually pretty good as well. There's reason to be a bit skeptical of some of the nuclear winter estimates.
I've been thinking about nuclear deterrence a lot lately.
One of the hardest problems with MAD is, once nuclear weapons are in the air, what do you do? Your half of the world faces imminent destruction. Then you have an awful choice between reprisal -- ultimately ending humanity -- or submitting to your fate, knowing you saved the human race. (In this toy, oversimplified example.)
States who are definitely willing to irrationally fire second are the least likely to be hit by nuclear weapons. You might try to just fake it, but the adversaries sometimes steal your secrets, and will definitely read your secret plan to "not really fire but pretend like you will." So you should aspire to genuinely believe you are a state who will irrationally fire second, that's the best way to avert nuclear catastrophe.
You might set up two file cabinets worth of secret plans, to be opened only in these emergencies. The first box has the plans for retaliation. If you open the second box, it has the plans for staying your hand at the last moment to preserve humanity, since the window for affecting the outcome has already passed.
I don't have much to add that hasn't already been written. Except for the recommendation, in strongest possible terms, that we stop calling this "mutually assured destruction" and instead refer to this dilemma as the "nuke 'em paradox."
>One of the hardest problems with MAD is, once nuclear weapons are in the air, what do you do? Your half of the world faces imminent destruction. Then you have an awful choice between reprisal -- ultimately ending humanity -- or submitting to your fate, knowing you saved the human race. (In this toy, oversimplified example.)
No. Your choice is between shooting back, half your population dying, half their population dying, and someone else inheriting the Earth, or not shooting back, half your population dying, you getting conquered by the followup invasion, and the one who called your bluff inheriting the Earth.
Ord didn't put nuclear war as a 1/1000 probability X-risk because he thought there's only a 0.1% chance of nuclear war this century; he put it at 1/1000 because absent something fatally wrong with our models or an enormous nuclear buildup (to many times Cold War arsenals) we can't actually end humanity that way.
A full nuclear exchange probably wouldn't end humanity. And you could argue that a few million people in the neighboring states of your enemy dying from fallout is worth it to prevent a country willing to launch a nuclear first strike from being the dominant power in the world after having nuked america.
States don't psychoanalyze each other. It's futile and dangerous. You plan your deterrent based around the other side's capabilities, and they plan theirs around yours. So even if a state swore up and down they would never, ever, actually use nuclear weapons, nobody responsible would ever believe that.
The Soviet Union swore it would never use nuclear weapons first. Nobody believed that, indeed the entire complex early-warning and fast-response apparatus of NORAD/SAC was created *because* we didn't believe it -- you don't need to be on a hair-trigger alert if you are 100% confident the other guy will never shoot first.
Modern Russia swears in print it will never use nuclear weapons unless the very survival of the state is at stake, and certainly not just because the survival of Vladimir Putin, or of his pride, is at stake, and the whole reason the world is concerned today is because nobody believes that either. (Indeed, arguably Putin has been frustrated of late *because* it seemed too many people were believing Russia would only use nukes to save itself from ultimate extinction, and he has taken steps to restore a certain amount of scary ambiguity on the point.)
Finally, a full strategic nuclear exchange between the major powers possessing nukes would hardly spell the end of humanity. It wouldn't even spell the end of the respective countries. The truth is much less dramatic and more squalid: something like 20-50 million people would die, immediately or relatively soon, another unknown number of millions would perish in the drastic economic and transport breakdown that followed, and all the nations participating would be reduced to non-major power status for generations.
But India, Brazil, Chile, Indonesia, New Zealand, South Africa, and many other countries will be fully functional, albeit plunged into quite a serious economic shock by the immolation of the world's biggest markets for a period of years.
The "destruction" in "mutually-assured destruction" was never, by its students, meant to imply "every last human being dies" or even "civilization ends" but "my country is reduced to a pale shadow of it fomer self," something as prostrate and miserable as Germany in 1945 or Kharkiv now. This is bad enough that very few national interests can justify its risk, but it does not further imply one need ponder profoundly existential questions about ending all humanity.
A second point is that there is one scenario where a nuclear exchange equals doomsday, and that's widespread use of cobalt/salted bombs. Thankfully this seems to be one of those rare nuclear avenues that nobody has been mad enough to explore in too much detail.
The bigger problem for the world at present (at least, to my thinking) is the issue that setting off an economic/technological dark age by zapping the most advanced and economically productive parts of the world overnight will probably prevent us from ever rising back out past a roughly 18th-century level of technology overall. We simply don't have the easily-available energy sources or surface ores you'd need to re-start the industrial revolution.
In the event of nuclear war, then, we'd better hope that enough remains of the world's trade and industrial infrastructure to keep the remains of the world economy ticking over.
I think you're overestimating what would be lost. There isn't that much actual physical stuff that would be destroyed, or rendered unusable. The main destruction would be human lives, and the network of relationships and agreements that are the underpinning of modern complex highly-specalized life -- the kind of network of relationships and agreements that let me work in an incredibly specialized field in front of a computer, trusting that other people will dedicate themselves to keeping the electricity on, and delivering food to the store so I can buy it on a just-in-time-basis and not have to worry about planting my own potatoes and harvesting enough in the fall to get through the winter on my own. Flung back on our own resources, every man needing to do everything for himself, be an amateur everything, would reverse the enormous efficiency of specialization and cooperation, and make us much poorer. Which is the harm done.
But it's not part of the physical world, and there's nothing that stops it from being rebuilt. You just need new people, and you need to re-establish the networks and understandings. It would take time, for sure, but there's no irreplaceable physical thing that would be gone that prevented it.
I think that losing that knowledge and network of trade and relationships is precisely the problem - it's a recipe for a bronze age collapse scenario. Which would ordinarily be fine - societies decomposing into smaller, less integrated and less organisationally/technologically sophisticated units is often a relief for the poor bloody peasants slaving away to keep all of those specialists above them fed (for the specialists, of course, it's a tragedy). But in our case we simply can't afford to lose the ability to build complex machines, move stuff around the world, or dig hydrocarbons out of the deep.
Industrialisation (which was a contingent rather than a deterministic event in any case) relied on what amounts to a huge cache of untapped, easily-available energy. We don't have that anymore. So, at least until such time as energy generation and manufacturing can be rendered more local, we simply can't afford a general collapse and reversion to smaller, less integrated polities.
I was thinking about this problem recently. It's very reminiscent of Newcomb's paradox. It seems like if you want to "win", you should one-box, i.e. press the button. This is the general idea of functional design theory. If hackers demand a ransom in exchange for not releasing stolen data, and the ransom is not paid, they usually release the data. And if the ransom is paid, they usually don't release the data. Even through releasing or not releasing the data provides them no short-term benefit, they want to set the precedent that some hackers are good functional decision theorists who stay true to their word, so that future victims will then also evaluate their options in terms of FDT and pay the ransom.
But maybe the idea behind FDT kind of breaks down if your decision literally ends humanity? Sure, over the entire length of an iterated prisoner's dilemma where agents can think about the long-term consequences of defecting, they may just decide to cooperate, but if the very action of defecting terminates the iteration of the dilemma by eliminating the other agent, the assumptions probably fall apart.
That is why ideally this system would be fully automated - so that if you are attacked, you automatically retaliate, and your enemies know that you will automatically retaliate.
One interesting parallel to Newcomb's paradox is that it is an exploration of a sufficiently smart oracle which can determine whether you're likely to one-box or two-box (which seems plausible as long as you don't need absolute perfection and are satisfied with merely high certainty); and in a similar manner, the state system will try (and must try, according to game theory and MAD) to put in a lot of effort to detect whether you're the type of person who will "press the red button" or avoid retaliation, and in the latter case ensure that you get some other position where you won't need to make that choice. Like, even if only a minority of people are psychologically capable or willing to actually retaliate, I would assume that the nuclear command would intentionally have been selected to consist of those people.
See, Deaths's end (book 3 of the Three Body Trilogy), and how putting someone in charge who isn't willing to pull the trigger can actually lead to armageddon. (Yes, yes, I know, fictional evidence and all, but thankfully there isn't any real evidence to reason from). See also Bret Devereaux's most recent post over at acoup.
I don't think that your simplified model of MAD reflects reality because it's not really half of the world throwing nukes at the other half of the world, it's 25% of the world throwing nukes at each other and most of the world staring at it with horror in their faces. If we're talking about "ending humanity" and half of the nukes are already in the air, then IMHO 5000 vs 10000 nukes will not make a qualitative difference of ending humanity or not - the question is whether you nuke the culprits, but the harmful impact on e.g. Africa, South America and much of Asia will be qualitatively similar no matter if you push the button or not. Also, the choice does not need to be made right now - a large part of the deterrent of major nations is planned through "second strike" retaliation capability e.g. nuclear-armed submarines which may fire their missiles after the first strike has hit and the geopolitical consequences have been seen.
Like, the existence of the human race is not really at stake (see the post linked above considering nuclear winter as exaggerated, especially as we have literally 10 times less nukes than when cold war estimates of consequences were made), but the existence of *your* "world as you know it" is. Or, in some cases, if you surface on your sub and see that your homeland doesn't exist anymore, you may follow the orders and enact retaliation - again, not against the majority of humanity that's not going to be involved.
This is the part where I confess it was a toy abstraction, firstly to serve the thought experiment, and ultimately, for a bad pun.
Brett Devereaux recent writing on nuclear deterrence (https://acoup.blog/2022/03/11/collections-nuclear-deterrence-101/) was quite interesting.
I worry about the Endowment effect applied to the Russian invasion. The more Russia/Putin pour in to the war in blood and treasure, the more urgent it becomes for them to win and the more they will escalate. In turn, the more escalation, the greater the pressure for the US to get involved.
For what its worth, I think that chances of direct US military intervention on any given day (very low to start with) are declining with time.
On the contrary, in my understanding of the situation (elaborated here: https://astralcodexten.substack.com/p/hidden-open-thread-2145/comment/5509257?s=r), as the war drags on, it will be increasingly politically difficult to sustain even current level of US support for Ukraine.
Nah. We can sustain current levels of support forever without breaking a sweat. A lot of nations have come to grief underestimating the productive capacity of the United States. Worth remembering we make more weapons than the entire rest of the world combined, and even in the most peaceful years we make a strong effort to sell tons to foreigners to clear space in the warehouse for next years' model.
Besides, the opportunity to do battle testing of anti-armor, early warning, and portable SAM systems should not be wasted. This kind of stuff is gold for the gnomes back at Raytheon, and a bunch of the folks in the Pentagon basement are all watching eagerly, too. Seriously, when was the last time it was possible to field-test weapon systems and protocols designed to neutralize Russian assets against actual Russian assets operated by actual Russians? Highly useful.
In terms of producing weapons, sure.
I meant Western political support more generally; in the case of US I expect sanctions are the first thing that would be questioned (I expect more immediate problem in the EU with respect to refugees, but that is admittedly beyond the scope of OP)
Why? If it doesn't cost us much, why should we care if sanctions on Russia go on forever? We don't much care about the sanctions on Iran because it doesn't cost us anything noticeable.
I've yet to see a good argument that sanctions on Russia will have any seriously noticeable effect on the American pocketbook. In 2019 US imports from Russia were $22 billion, of which the largest categoriies other than oil (oil is about 60% of the total value) were precious metals at $2.2 billion and iron at $1.4 billion. Those are rounding errors for a $20,000 billion economy. The oil imports are a single-digit percentage, and are readily replaceable by domestic supplies, if the price oif oil is more than ~$60/bbl or so, where fracking breaks even.
Basically Russia just isn't very important to the US economy. We do more business with Chile and Vietnam than we do with Russia.
Main significance of US sanctions is not direct US-Russia trade (which is indeed tiny), but that they make it very difficult for Russia to export anything anywhere, just like it happened with Iran, since US has such dominant position in global financial and similar services.
And Russia is an important exporter of many commodities whose prices are determined on global markets, so when Russian exports are missing, global prices spike; and that is indeed happening and it will affect even US industry.
Ad oil specificaly, I think there will be reluctance to invest in expanding production at least in the "West", given that there is an expectation that governments will be seeking to shut it down as soon as possible due to climate targets.
None of this means that continuing sanctions would be somehow crippling to US economy; but it certainly means there is going to be pressure to make them softer; perhaps first informally, by not going too hard after workarounds etc.
I don't think so. You're right that some small countries somewhere or other might be priced out, so as usual folks in the Third World have a good reason to curse this rivalry, but it isn't going to change things here sufficiently to notice. I don't think commodity prices are volatile because of Russian sanctions, or fear of same, but because of inflation -- a prexisting problem -- and the fear of what *else* might happen. A much bigger war is a far bigger concern than any amount of sanctions to people who trade on the European or US markets. Mind you, I'm not suggesting people who trade European equities don't have real concerns about the impact of sanctions on European markets -- that's a whole different story, particularly for Germany.
Ha ha, no, there is zero reluctance on the part of investors to put their money into oil and gas production when the price of oil is this high. You'll notice XOM is at a multi-year high? They're awash in investment capital, and so are all the middle-sized guys. Certainly they are a little concerned with what climate-control nostrums might roll out of Washington -- this has been a constant source of mild concern since 1975 or so -- but (1) historically not much ever has, and (2) the Democrats are not suicidal enough to propose anything serious during a big run up in gasoline prices, and (3) the Democrats are going to be slaughtered at the polls in 6 months anyway.
So I don't buy the argument at all. Indeed, in my entire life, I've *never* seen serious domestic US pressure to ease any sanctions for practical economic reasons, and it would be a big surprise if it happened this time, for the first time.
Not much issue from refugee, at least for now. I don't expect to change shortly, there is absolutely zero traction of the traditionnal anti-immigration political discourse in this particular case. Ukrainian are welcomed in eastern Europe en masse, and in western Europe are not causing any kind of ideological resistance, only some practical details and logistic difficulties.
There are many reason for this: it's seen as temporary, people moving are of more or less similar culture, from immediate neighbors (seen from eastern europe at least) without previous stigma, this war fit in previous cold war narrative which is still very present in older and/or eastern segment of Europe population (exactly the one that is not too fond of immigration in general), and it's mostly old folks, women and children (while the previous waves where mostly young men).
It may be racist, but it's also the reality: the syrian-war immigration is so different on so many point that you should not expect the same reaction this time. This differentiated reaction is not only specific to Europe, you can see it in most immigration waves all across the world.
Word among sources I expect to be knowledgeable (e.g. https://twitter.com/kamilkazani/status/1503053699798769666 linked by Marginal Revolution today) is that the more relevant effect driving escalation is status / legitimacy. Putin started the war in part to cement his domestic political position and will likely face a serious uprising or coup attempt if he can't sell it as a clear victory.
Given that the stated casus belli was bullshit anyway though, can't he withdraw in exchange for bullshit concessions and call it a victory? "We successfully de-Nazified Ukraine, hooray"
If Russia withdraws today, there are going to be a hundred thousand Russians going home with the firsthand knowledge that Russia didn't de-anythingize Ukraine. And they're going to talk to their friends and family. Some lies really are too big.
If propaganda was that easy everyone would do it. :-)
The stated casus belli isn't really for consumption by the people who could cause trouble for Putin (oligarchs, generals, politicians, etc.). It's for Russians who only know about the war through state TV (and maybe also a little bit for particularly gullible Westerners). On this theory, Putin embarked on the war to shore up his support among the first group, precisely because it would be hard to fake an achievement like that. If he wins it signals competence, "grip on power", and the benefits of keeping him around (I'm sure many in Russia really would prefer a Russian-sphere Ukraine to a NATO-sphere one, other things equal). If he tries and fails... well, that would signal the opposite.
We have already seen Russian government backpedaling on some of their claims in national TV - e.g. now suddenly they say that change of Ukrainian government is not a requirement, and it never was one. For "internal consumption" Russia has no problem effectively issuing convincing statements like "Oceania has always been at war with Eastasia" even if this directly contradicts something they said earlier. They are good at it, they have all the tools, and it works.
So I'm fairly sure that if Putin decides to make some agreement, that the propaganda machine can sell to the Russian public that that this is a big victory; and that it's the exact thing that Russia initially wanted to achieve. And in peace talks, Ukraine and the west would likely cooperate by inserting some kinds of concessions that are practically insignificant but have symbolic PR value that help Putin save face. Those in Russia who believe the TV will buy it, and those who were always skeptical will just be happy that there is peace.
Is there a reason why the US couldn't base its troops out of Kuwait (and Qatar, Bahrain, & UAE) to control Hussein?
Looking back at things from twenty years later, the answer seems to be "no".
Back in 2002 (at least as I recall it; human memory going back two decades is quite fallible), there were lots of people (on OpEd pages, in opinion/policy magazines, and on these new-fangled things called "blogs") who expected that US forces remaining anywhere on the Arabian Peninsula would continue to cause the irritation that led to 9/11 whether or not they were inside Saudi Arabia's official borders.
This was not an opinion, mind, that was strongly associated with people either for or against the idea of invading Iraq. There was an op-ed of the era (at least, again, as I recall it) that could be summarized as "Iraq had nothing to do with 9/11, we don't need to invade Iraq, invading Iraq is stupid, to stop another 9/11 we just need to get our forces entirely off the Arabian Peninsula (specifically including out of that new airbase in Qatar)."
Is immortality good from a utilitarian pov because it means more years to enjoy more hedons or bad bc of the law of diminishing returns (things become boring, less urgent, less meaningful, etc. once I know I’ll live forever or live for much longer)?
Even if boredom reduces the utility gained from experiences, it seems unlikely that it would make the utility *negative,* so over the long run the positive experiences you have from immortality should outweigh any losses from not being driven by your mortality.
I think future people will be able to modify themselves so they don't get bored unless they want to, so it means more years to do whatever you want, explore things, and self-actualize.
I'm surprised that kind of mind/body modification doesn't come up more often in discussions of immortality. If you can suppress boredom, then you could just do a thing that makes you happy forever as long as entropy or ill intent don't get you.
> If you can suppress boredom, then you could just do a thing that makes you happy forever as long as entropy or ill intent don't get you.
This line of thought seems like it is isomorphic with you just modifying yourself to be perfectly happy all the time while not doing anything at all, which is not something I (the current, unmodified, version of me) would consider a good outcome.
A professor in college had a long discussion with us about a similar topic. If someone could be hooked up to a heroin machine (or pick a better drug for this) and just live in a constant state of...I guess we would call it euphoria or something like that, would that be a good life.
The strong consensus was no, but it was hard to separate the hidden variables (like the people who had to run the hospital, and the resources expended so that some could live like that).
What kind of immortality? Involuntary immortality seems like torture under pretty much any values system, while voluntary immortality seems much more likely to be good. My personal feeling is that voluntary immortality is likely to be substantially net positive from a utilitarian POV, and my guess is that most utilitarians would feel similarly. The "more years to enjoy more hedons" effect is clearly good, while it's not obvious to me whether the second-order effects are good or bad. Yes there's likely to be more boredom with immortality, but there's also less loss of loved ones and more time for a truly deep exploration of entire fields of study, kinds of art etc. Given that there's one clearly good effect and one ambiguous effect, I'd lean pretty strongly towards it being good.
Why, if you lived forever, you could have sex with every person whom ever existed. Or a meaningful relationship. We could each spend ten years married to Hitler, and find out what makes him tick. Imagine the possibilities. We would either find out more than we ever wanted to know about Hitler, or we'd find out more than we ever wanted to know about ourselves.
Infinity is a really long time.
Involuntary immortality is the most hideous fate imaginable. Once you perceive it clearly then your mind recoils in horror.
it's a fairly rudimentary observation, and not very philosophically sophisticated, but QALYs is a very relevant point here. Furthermore, I imagine suicide remains an option.
More straightforwardly, while I theoretically understand the question, boredom and lack of urgency hardly seem like a problem to me; it's a big weird world filled with lots of problems to solve, challenges to overcome, pleasures to indulge...
The answer is obvious once you spend an infinite amount of time thinking about it.
Scott, banal but important question about the Book Review competition.
I submitted my entry last week, and I got a one-line acknowledgement from Google Forms that my "reply had been received". That's it. Should I have expected anything more? Is it going to be possible to know at some stage if an entry has been received or not, and is being considered?
Sorry to bother you.
I definitely am not checking that Google Form and wouldn't have sent you further acknowlegement. It might be worth me doing that at some stage, but for now don't worry.
Has anyone encountered the writings of E. Fuller Torrey before?
I've read a couple of his papers, he seems great, any specific questions?
I've been reading his book, The Invisible Plague: The Rise of Mental Illness from 1750 to the Present.
The book goes over Torrey’s thesis, that the prevalence of insanity, which was once considerably less than one case per 1,000 total population, has risen beyond five cases in 1,000.
He goes through (in really impressive detail) the records on insanity in England, Ireland, Canada, and the United States over the last 250 years.
I didn’t know if anyone had read the book, and found any obvious refutations?
I've found it pretty convincing thus far. I’m considering doing a book review on it for the book review contest if I get some time.
With things like #1, it's hard not to think about it as the EA movement just hiring a marketing person of sorts. To see what I mean, go to the page and scroll down and read the rules. You will find they've already marked down what they think the most important issues are, the right moral system for approaching them (utilitarianism), and so on. They link from the rules to a site which even cranks down on what kind of writer they want, which is broadly someone who is basically Scott-like in most respects.
I don't actually have a problem with any of this; it seems like "let's pay for more dialogue, especially around this thing we think is great and great for humanity" doesn't seem like a bad thing to me. But you'd still be surprised to find, for instance, that they didn't end up giving it to someone "basically like" Applied divinity studies, perhaps more focused on one of the pet issues.
I could be completely wrong about that - lord knows I don't know the individuals involved. But I still look forward to more people getting into the writer-grant game; right now it's basically this promoting EA, and another one that promotes being a spread-sheet and economics enthusiasts, or nothing. I think whatever minor discomfort I get from stuff like this gets solved as the diversity of grant-payers grows.
I think those are mostly suggestions, aimed to point at the area they want to people who might not know what effective altruism is. My guess is ADS is in the category of things they would approve of. I know he has already gotten an Emergent Ventures grant and he may have gotten others.
ADS is sort of my mental model example for where this money ends up going. They are "too old" to get the grant by the rules of the contest, but I think someone who writes pretty similarly and is sort of explicitly-EA-aligned in the way they are sort of explicitely progress-studies aligned probably runs a pretty good chance of getting the nod.
I have no problem with this group's selection criteria at all, except to the extent anybody you'd expect to win Tyler's grant would probably run a pretty good chance of winning this one too, and they are pretty much the only grants that exist.
I'm not disapproving of EA people here; they are hiring a person to take the right positions as they see them, within a certain range of varience. They have their money where they mouth is proving their sincerity/commitment to those ideas in more ways that one. I'm just surprised there aren't more groups doing it.
Basically there's two groups who subsidize people talking about Utilitarianism, trying to ensure moderate amounts of paperclips and talking about economics are the way forward. I'm looking at this and going "OK, here's two groups who will pay for bog-standard bay area gray triber views to be promoted. Who is paying for literally anything else?" and there's 0 entries in that field.
A long time ago, a bunch of conservatives started paying to promote/encourage young conservative legal minds in a bunch of different ways, and that's had a noticable effect; basically everyone goes "yeah, the world is a lot different and better for conservatives because they invested in this way". EA here is paying for the same thing, but with writers instead of lawyers; with enough money and on a long enough timeline, they will end up controlling thinkpiece-SCOTUS, so to speak.
It's just weird to me that more groups aren't doing this, especially when it's so cheap (by "lots of tech money" standards) to do so.
> It's just weird to me that more groups aren't doing this, especially when it's so cheap (by "lots of tech money" standards) to do so.
Maybe their incentives are against planning this long term? Or maybe results are more mixed than the FedSoc example suggests. Dunno.
I think all my opinions on this instantly change if I find it it's been tried and failed in the last 20 years or so. Entirely possible someone's tried to manufacture thought-leaders in this way and failed, and EA just didn't get the memo.
I've been thinking about a consequence of increased healthspan. While some people sort of have the same years gain of experience over and over, we can assume that some will continue to learn and to become more skillful at the things they care about.
There's been very little about this in sf-- people would rather write dystopias. Also it's hard to imagine what even as little as an extra 50 years of learning and practice could do for people-- about as hard to write about as an accurate portrayal of increased intelligence.
There are a couple of possibilities. One is a limited longevity-- it turns out that people stall out at a couple of centuries. No matter when you were born, you have a chance to be among the top pianists in the world. You just have to work and wait. Same for something like running a major museum.
The plus side (though it's rough for ambitious younger people) is that the level of skill in the world (probably including skill at teaching) is going up.
The other possibility is that there isn't a limited lifespan any more. People are likely to just keep going on, though there are presumably issues with memory and possibly with boredom.
I assume people will develop new skills and hierarchies to be at the top of.
> There's been very little about this in sf-- people would rather write dystopias.
Egan's "Diaspora" maybe, or "Incandescence".
Recent sci-fi goes through incredible contortions just to _avoid_ lifespan extension and transhumanism in general, because the consequences to the setting outweigh whatever theme the novel/rpg/etc was supposed to be about.
This is the biggest elephant in the room. Seeing small children today and realizing there's an above-coinflip chance _they will never grow old_ blows my mind.
50+%? You're way more optimistic than I am. Unless you think there's a high near term Xrisk threat...
There probably is an xrisk or two, but I'm not counting it.
We have an ongoing biotech revolution and AI revolution that's just getting started. Both were futurist joke fields 20 years ago and now even normies know something's up.
Children born this year will be 60 in 2082. I estimate us getting through the technological (but perhaps not legal & mass production) hurdles of radical lifespan extension somewhere between now and 2100. Based on how AlphaFold caught me by surprise and casually solved an impossible problem, I'm ready to believe anything.
For comparison, 60 years ago _we didn't understand the genetic code_. Today we're at shit like https://www.proteinatlas.org/ or https://www.rcsb.org/ or http://www.informatics.jax.org/ . Extrapolate, and if anything I'd say the 50% figure is conservative.
I think the correct path is to change direction. I retired at 55, and returned to the University for a different career. Now I have a completely different career path, I see the whole universe as new, even though I'm 60.
I was thinking about the possibility of a society where there's also social pressure to change direction.
I'll mention Larry Niven, 'booster spice' and Known Universe series.
I see aging as our answer to a trillion cells and cancer... (Unconstrained division of one of those cells.) So I don't think there will be any big change. As I've gotten older (63) I'm seeing death as partly a good thing. Sure I want to see what's going to happen, but also I can get out of the way and leave the world to the next generation, (and more selfishly, my assets to my kids.)
Young people get a lot less cancer. Presumably some of this is less time to start cancers, but there's also having a better immune system.
"Rainbows End" by Vernor Vinge is an itneresting near future scifi take on this, which explores old people dealing with being unexpectedly healthy, but not having a clear role in society anymore.
I think a lot depends on whether or not they become "old" in various psychological/cognitive ways. I'm not talking about senility per se, but the ways in which I experience my own mind changing. And of course what an artificially long lived person experiences might be something unlike any life stage of a person experiencing normal aging. We don't know.
What I don't expect is for their life to be same as usual, only more of it. I.e. Heinlein's series of novels featuring extreme longevity is just plain wrong. (He has a character who's been in the same career "since first maturity", as she puts it, in explaining why she's the top of her profession at a mere 2-300 years of age.)
Lazarus Long got around quite a bit. “Time Enough for Love”
I think the most interesting fictional depiction I've seen of an extremely long-lived person is actually Doctor Who (which is ironic, since the world-building of Doctor Who is otherwise rather terrible).
1) The Doctor acts like an adult among children. He makes important decisions on behalf of the people around him without even asking their input, and sometimes against their objections. He regards anything that goes wrong as his fault for failing to control it. He views himself as much smarter than everyone else--not in a prideful way to bolster his ego, but simply as a fact.
2) The Doctor doesn't update his life philosophies for anyone or anything (even when the flaws are fairly obvious), because he already considered and rejected all the counter-arguments ages ago. The current crisis hardly weighs against his many lifetimes of experience.
3) The Doctor is a master of arcana. No matter what situation he finds himself in, he always has some esoteric bit of scientific or cultural knowledge he can exploit to create new options and get himself out of a bind.
4) The Doctor has already seen everything, so he spends his time showing the wonders of the universe to young people who have never seen them before, so he can experience their excitement vicariously.
Here's another, but perhaps less thoroughly worked out.
"I fear Benedict. He is the Master of Arms for Amber. Can you conceive of a millennium? A thousand years? Several of them? Can you understand a man who, for almost every day of a lifetime like that, has spent some time dwelling with weapons, tactics, strategy? All that there is of military science thunders in his head. He has often journeyed from shadow to shadow, witnessing variation after variation on the same battle, with but slightly altered circumstances, in order to test his theories of warfare. He has commanded armies so vast that you could watch them march by day after day and see no end to the columns. Although he is inconvenienced by the loss of his arm, I would not wish to fight with him either with weapons or barehanded. It is fortunate that he has no designs upon the throne, or he would be occupying it right now. If he were, I believe that I would give up at this moment and pay him homage. I fear Benedict. "
-- Nine Princes in Amber by Zelazny
see: Asimov’s spacer societies.
That's fictional evidence.
Everything on this topic will be.
There's also the possibility of looking at futures that don't show up in ficton.
Which also will be fiction.
One way to look at this is that the societies described in fiction are a very biased subsample out of all possible/plausible/likely hypothetical societies, namely, societies that are (a) interesting - not obviously permanently utopian or dystopian, but where a fiction story with interesting strife can happen; and (b) reasonably easily explainable to reader through a story.
If we would intentionally try to perform a systematic review of hypothetical futures for some practical purpose, I think that we would consider many options that a fiction writer would discard because of how it does/doesn't fit the needs of storytelling.
Oh dear, if no one has made them up yet, how are we going to look at them? I'm probably not understanding what you mean by fiction.
I mean, from the youth's perspective, it's pretty dystopian when there are no jobs because the current occupants live forever and only get more experienced as time goes on. The only recourse is to aggressively reproduce to create demand for your services.
Part of this depends on resource distribution, but the excellent quality of stuff produced by highly skilled ancients may take some of the edge off.
That's why I think massive increases in longevity will be a big driver for space colonization. Faced with either sharing power and influence with the younger folk, or giving them the means and opportunity to go off and form their own small ponds to be big fish in, the folks in charge will tacitly favor the latter.
In my story, the first space colonization effort is crewed by the future equivalent of second sons of nobility - people who are high status enough to go on such a prestigious venture, but who have no actual chances of advancement at home and thus are willing to go on what is essentially a suicide mission (even if it succeeds, it's a one way trip).
This is under the assumption of no FTL. There's no way the masses will ever come close to getting the resources required to attempt interstellar travel.
Well, in *my* imagined SF story society where aging is solved, society stagnates due to the "science advances one funeral at a time" effect and the ability of elites to entrench themselves in power indefinitely. Also, safetyism is turned up to 11 because people have more to lose.
Reminds me of "Icehenge" by Kim Stanley Robinson! (Or the Mars series, but I like Icehenge better-- more adventurous and much more concise)
TL;DR: lifespan is extended by an order of magnitude but people still "peak" in their 30's-50's, with interesting effects on academic norms and progress.
More days than not, I *already* feel like the safetyism is up at 11...
The funeral quote comes from Planck, and has been looked into:
https://www.lesswrong.com/posts/fsSoAMsntpsmrEC6a/does-blind-review-slow-down-science
That's the usual idea, but is it wise or healthy to only assume the worst?
One possibility is that science stagnates, but the arts keep improving.
Also, we don't know what age people stay at. Maybe science advances one funeral at a time (is that true?) because people get mentally rigid with age. If people are long-term 20 year olds, maybe things will be different.
> but the arts keep improving
What does it mean for the arts to "improve"? "Art" is defined by trends and fashion, so unlike science it doesn't improve or deteriorate, it can only change.
The arts produce more emotional intensity and/or more satisfying long term effects.
And possibly an increase of complexity, just for the fun of it. Movies are more complex in general than stage plays.
Science stagnates because successful scientists get promoted, and defend their works against new ideas.
My theory is that people get mentally rigid due to *experience*. Biology probably doesn't help, but I doubt it's the only cause.
Also, people are just less motivated when the future seems so long. This is a world where people think nothing of slaving away for 50 years on a PHD, and even then they won't be able to get a job until someone higher up kicks it, which almost never happens.
There are certainly people who are driven by the feeling of having limited time, but I'm not sure how common that is.
Presumably, Vladimir Putin can't cancel Leo Tolstoy: https://en.wikipedia.org/wiki/How_Much_Land_Does_a_Man_Need%3F
The West might though, given present trends.
нет, я так не думаю
There have been cancellations of Russian authors, composers, conductors and dancers throughout Europe, post the breakout of war.
Well, in some cases that is probably just silly. Leo Tolstoy is a gift to the world.
In my anecdotal experience in college, women seem to give themselves a much larger work load then men. This being done usually through having more majors or minors. I frequently would hear from my female friends how busy they were, them majoring in everything from theater to business to chemistry. I would hardly ever hear from my male friends how busy they were and they usually didn't seem to be under as much work load in those same areas.
I've also heard (but have not dug too deeply into) that women make up the majority of graduate degree and PHD earners now as well. Extended schooling brings much more work. I just wonder why women seem to give themselves more work to do in college where I see men not have to do as much. The sex differences could be easily explained away as women are more vocal about the amount of work, but that still doesn't explain why women take on such a heavy workload in the first place. I'm wondering if any of you have noticed the same thing.
> Extended schooling brings much more work
As a PhD I disagree; extended schooling is the easy option compared to going out and getting a real job.
A PhD (unless done for immigration reasons) is an incredibly expensive luxury good which you purchase at a time in your life when you can least afford it. You pay hundreds of thousands of dollars in foregone earnings for the right to call yourself "doctor" and spend your twenties obsessing over your favourite subject. It's something that you're much more likely to do if you have a "fallback position" of marrying a rich dude than if you are expected to be your household's primary breadwinner. And it's a double whammy for men because mens' attractiveness is highly dependent on their financial position whereas womens' is not. (I had a certain amount of family money so spending my twenties earning fuck-all wasn't a big problem for me, but most men are not so lucky.)
tl;dr female privilege
I think this depends very much on what kind of a Ph.D. you're talking about. In engineering, at least the aerospace variety, I'm pretty sure the sweet spot is an MS, but I think the marginal difference between a Ph.D. on the one hand and a two-year head start on the other is pretty small.
In the hard sciences, a Ph.D. is a prerequisite to most of the really good jobs, and those jobs are really good. Telling someone they shouldn't be a biochemist because biochemists have to spend three more years in school than engineers is kind of missing the point - at that stage, you're trading modest differences in dollar-maximization against getting paid to do what you actually want to do.
A Ph.D. in Music Theory, is either a lottery ticket or a luxury good.
This is all true. I note that aerospace engineering PhD students are mostly men.
I don't really think your take is correct. Yes, you're forgoing some earnings in the short term, but lifetime earnings for people with doctorates seem noticeably higher on-average than those for people with a bachelors or masters degree. https://www2.ed.gov/policy/highered/reg/hearulemaking/2011/collegepayoff.pdf (PDF warning). Moreover, I think the characterization of a PHD as "obsessing over your favorite subject" understates the extent to which getting a PHD is hard work, and the extent to which PHD students are more likely to have depression than working professionals https://www.nature.com/articles/s41599-021-00983-8#Sec16.
I think it's possible your experience is just atypical? Wikipedia says that double majors are only very slightly more popular among women than they are among men. https://en.wikipedia.org/wiki/Double_majors_in_the_United_States#Class_and_gender
Is this necessarily about double majors, or would it include taking more courses outside of one's major?
No experience in the area, but an explanation could be the expectation of discrimination that extra credentials could compensate for.
I sorta suspected the same. Higher levels of neuroticism among women, greater fear of failure, more time spent studying.
Once upon a time, bean would have posted something like this as a series of comments on slatestarcodex, but now that navalgazing is all grown up I feel like it's worth pointing out the existence of https://www.navalgazing.net/Early-Lessons-from-the-War-in-Ukraine
The first part of this BBC podcast covers some of the same issues: https://www.bbc.co.uk/sounds/play/m0015f1k . Quote: "The part of the Russian army that is modern isn't large, and the part that is large isn't modern". It suggests that troops on exercises often report equipment as working when it isn't, to avoid hassle, and that some equipment is sold on the black market. So the less professional part of the army is badly equipped and that isn't apparent to the senior commanders who are told everthing is working fine.
Thanks for linking that. Very informative as usual for bean.
I've never really understood what the EA movement is. Can someone explain?
It's an intellectual and charitable movement that tries to find the ways to do the most good. So where most people when donating money think about what sorts of causes are closest to them or something like that, an EA tries to think about where their dollar might be put to the best use to promote overall wellbeing. Everything else is downstream from that way of thinking.
Right. I see. I've a hypothesis that most charity and altruism is affective rather than effective. That's not a criticism of this movement especially and perhaps I'm being too cynical after 20 years studying human nature.
https://www.effectivealtruism.org/articles/introduction-to-effective-altruism
one dollar for malaria nets helps way more than one dollar for the Make A Wish foundation
You can't say that in general. It's entirely dependnent on your utility function. You need to qualify it with the appropriate standard by which you judge effectiveness. I personally don't believe saving as many people's lives in Africa as possible is optimal for promiting the long-term wellbeing of humanity. Spending the money on contraceptives for Africa would likely be much more effective by my standards.
Under vast majority of sane and nonevil utility functions "one dollar for malaria nets helps way more than one dollar for the Make A Wish foundation" is true.
Would you consider a utility function that includes a high "discount ratio" depending on your "relationship distance" to the other person (e.g. caring about your children much more than your neighbors, and caring about your neighbors much more than someone you'll never encounter) as insane or evil?
Honestly, even if you have an inverse distance squared law for utility, you would have to discount Africans down to the level of, like, slime mold, before mosquito nets didn't outweigh random crap
That is why the AMF is so highly ranked, because charity is like, the least efficient market possible, you can buy saved lives for almost nothing compared to far more popular 'help people' charities like cancer research or disaster relief
But yes, we would tend to think of those preferences as being typical human nonsense which we should all strive to overcome. I don't know if I would blame you for failing to do so, it is *really hard* to make yourself care about some random african kid as much a you care about your own children. But I think I would blame you if you failed to recognize that you ought to, even if you can't.
The way out of feeling horribly guilty is, the more we pour money into actually effective charity kind of the more efficient the market will get, until it is impossible to save lives for a few dollars because everybody already has mosquito nets. At that point, you'll finally be allowed to spend $100 on new clothes without worrying that you should have saved a life instead, because you simply won't be able to
Well, the utility function is not up for grabs, it is what it is, there is no ought - if according to the current utility function A is better than B, and according to some other utility function B is better than A, than I should *not* strive to adopt that other utility function because it will cause B to happen and (according to the current one) A is better.
To use your example, if I (or my neighbor) would literally care about some random african kid as much as I care about my own children, and act accordingly e.g. with respect to resource allocation, I would consider that I (or my neighbor doing the same) am a bad parent that is grossly failing in my parental duties of care (which require, at a bare minimum, that I guarantee my kids orders of magnitude more resources than the African median - if my kids got that level of resources, that would be literally criminal neglect that would and should result in my community taking away my kids to raise them better) that act would be grossly immoral. Like, it's great to help others, and saving a life with some of your extra money is a great use of that money, but if you're helping others so much that your kids get just as small share as a random African kids, that's far too much and not okay anymore, that's taking away from your kids what they deserve. In essence, helping others is great with your "free spending" resources, but you're not morally entitled to unilaterally choose to give away everything to outsiders because your family and community do deserve more support from you - you are a part of those communities, you have mutual obligations that you don't have for the rest of the world. Like, if my neighbor had a utility function like you describe, they would be a good person but a so-so neighbor. I would want to be able to rely on them that they will, as a neighbor, favor my interests over a random person - for example, if a violent conflict arrives (as the recent Ukraine war), can I rely on them that they will take my side? If they would explicitly try to stay neutral and say that both sides deserve equal care, I would consider that behavior as immoral, shirking their moral duty to the community.
Far from feeling horribly guilty and striving to overcome that, I consider that it is my ethical and moral duty to ensure support for my dependents, and it is my ethical and moral duty to my family and community to continue caring about them at some rate that's higher than "equally divided 1-in-8-billion share of caring", and if I adopted such a globally-equal utility function then that would be essentially defection in a cooperative game, an immoral, antisocial activity that should rightly result in reprisal and shunning from my community and extended family - in essence, exclusion from the tribe because I choose to abandon my tribe(s) in favor of others. The utility function is not up for grabs - for most utility functions, changing your utility function has very poor utility.
A while back, I started working on a Wikipedia page covering David Benatar's "The Human Predicament," a work arguing strongly against bringing more humans into existence. You can read the draft here (I'm around halfway through the synopsis section so far): https://en.wikipedia.org/wiki/Draft:The_Human_Predicament
Would publishing such an article likely be overall net helpful or harmful?
EDIT: Assume this will be published on a private blog, with the actual Wikipedia article either significantly pared down from the current draft, or significantly more external sources added.
I'm not sure if Yitz is eligible to submit a review of ''The Human Predicament'' to Scott's book review contest now, but this is the type of book I would love to read about there.
Good idea, but shuush... the book review contest is anonymous..
(some Wikipedia banter)
Apparently it's impossible to "collapse" a thread as the author of a comment, but I can work magic with camera edits.
I agree that there's too much of the primary source. However, if it passes on notability, I would welcome it as a wikipedia page - it's approximately 7 times better than the average Wiki page and it's not even finished yet. I like it.
Thanks! Alex Power is correct above however, in the sense that anything that I can't pin down as being supported by Wikipedia policy is likely to get removed, and get removed fast. The "Summary" section is the exception here, as I believe it's explicitly stated that you can provide a short summery of a book without citing sources. A longer summery is more tricky, and if I want the info to stick on-wiki, it will need to be backed up by others writing about the book.
Anyway, all of this is a bit of a distraction from my original intention in mentioning this here, which is to question if providing more visibility to (and more steelman arguments for) Benatar's philosophy is likely to be helpful or harmful.
Seven times longer, perhaps: there are many many articles on villages, 19th century sportsmen, etc. that are quite short.
If you cut the "content" and "summary" section it might pass WP:AFC once somebody fills in the "[add section about how this book expands on that]" placeholder.
I'll likely move it there, as I started writing this before I had a good grasp of Wikipedia policy, and re-reading it you're correct, it's sourced too heavily on itself.
I'm reading the new book by Thomas Insel, the former head of the NIMH, and will review it. Hope Scott does the same.
Don’t know if you’ve finished this yet, Freddie but in chapter 10 - ‘Innovation’ - there is a reference to the NLP lexical analysis of Iris Murdoch’s “Jackson’s Dilemma”. I know some people thought she was showing cognitive decline with that one, but it is still my favorite IM novel.
I’ve always had a soft spot for a good joke. From the Thomas Insel - who is not involuntarily selibate, BTW - book that Freddie is taking about:
‘There is an old joke about the impact of psychiatric treatments. A cardiologist and a psychiatrist are kidnapped. The captors explain that they will shoot one of the victims and release the one who has done the most for humanity. The cardiologist explains that his field has developed many new drugs and procedures, preventing millions of heart attacks and saving millions of lives. “And you?” the kidnappers ask the psychiatrist. “Well, the thing is,” he begins, “the brain is really complicated. It’s the most complicated organ in the body.” The cardiologist interrupts, “Oh no, I can’t listen to this again. Just shoot me now.”’
I took a look at Insel on amazon, and there's someone selling summaries of his books. Interesting niche.
I have a reading list a mile long already, but I'm looking forward to your review.
The question from the audience that he related in his February Atlantic article has me interested:
‘ When the Q&A period began, he jumped to the microphone. “You really don’t get it,” he said. “My 23-year-old son has schizophrenia. He has been hospitalized five times, made three suicide attempts, and now he is homeless. Our house is on fire and you are talking about the chemistry of the paint.” As I stood there somewhat dumbstruck, he asked, “What are you doing to put out this fire?”’
I might have a look at that book too.
That looks like it could be quite interesting, looking forward to the review.
Same.
> The effective altruist movement is offering $100,000 prizes to each of the top five new EA-aligned blogs this year. If you were thinking of writing a blog that touches on EA topics (x-risk, progress, global development, moral philosophy, AI, etc) now’s a pretty good time.
Sorry, is this for pre-existing blogs (started in the last twelve months) or new blogs that will be judged at some later date this year? The rules are unclear. They hope to award in 2022. They haven't said, for example, if I started a blog today if I'd be ineligible because they're judging on the trailing twelve months. In fact, without your comment I'd assume they were only looking for pre-existing blogs.
This honestly strikes me as incredibly weird. You have this thing where you want to encourage writing of a certain type to promote (X) where X is a movement, or a moral system, or whatever. And for your pool you have every blogger that exists, which presumably includes a bunch of people who are doing pretty good work but languishing in obscurity.
But the priority goes to 3-monthers, which seems counterproductive to me. You have a smaller body of work to judge them by, you have no idea how much stick-to-it they have, etc. It just seems like a bad call. I think if I had to steelman it, it would be something like:
1. A lot of blogs start and fail in X months *because* they get no encouragement; it's a lot harder to imagine a blog not making it to a year if they have a 100k obligation.
2. Anybody who has been doing good work for 14 months without reward is probably likely to keep doing it even if we don't pay them - we might not get *quite as much* as if we paid those guys but we can get most of what we would have got, for free.
3. Saying "best new blog" is a lot more compelling than having to explain that some people haven't had "big exposure event" luck in the first 12 months and aren't doing so well.
Which is fine, but you feel bad for people who are doing good work but haven't had great luck in the first 15 months or whatever. I've had a lot of that luck and have a lot of unearned success as a result, and there but for the grace of god go I.
Where does it say priority goes to blogs started in the last three months? I'm not seeing that on the site.
This was just bad writing on my part; I was trying to compare/contrast 3 months vs. >12.
Thank god, I started my blog 4 months ago.
Yours was one of the ones I thought of as "people who I'd basically expect to get a nod in this contest", for what it's worth.
inshallah
I think their goal is to encourage people to start new blogs, and the "three months" thing is only in there so people who started one day before their contest don't feel cheated.
As a longer-term EA blogger, I feel like the movement has been very nice to me and made it clear I can access their money if I need it. I hope other longer-run EAish bloggers have had similar experiences.
The "to get more people to make completely new blogs" motivation make a lot of sense to me. My brain was de-emphasizing that quite a bit for whatever reason, which is weird on my part. That part is a lot less weird to me than "9 month old blog make sense for this, 13 month old blog doesn't".
But your experience/impression they would be helpful for an older blog sortof negates/superceeds my "vague impressions of weirdness" anyhow, so my comment is probably mooted on all fronts.
The rules say "Qualifying blogs will generally need to be new or started within the last 12 months, though exceptions could be made for special cases (like a long inactive blog). Please reach out if you have questions."
I also wonder what their policy would be if you're writing on something like LessWrong, which is sort of half blog platform, half social media site
I'll reach out. I guess it's just if they said something like, "Judging will start in November" vs "Judging will start tomorrow" those are very different competitions! The former is obviously trying to get people to start new blogs. The latter is more like a "best newcomer" award.
ETA: Even the last twelve months comment doesn't say twelve months from what. Today?
Why haven't you started a blog yet, dammit?
Life keeps getting in the way! Stupid life. Hopefully these two contests will finally kick my ass into gear.
“ Steven Ehrbar gives a theory I’d never heard before for why US invaded Iraq: to unpin US garrisons in Saudi Arabia.” It’s hard to believe that the Bush Administration was that competent. If they had been, would they have ignored that taking Hussein down pretty much guaranteed Iranian hegemony over Iraq?
I remember much Neocon hope in those days that the people of Iran would rise up and demand freedom. Arab Spring and all a decade early. People really thought liberal democracy would just work in Iraq and Afghanistan and all the neighboring countries would want to follow suit.
Which demonstrates yet again their incompetence.
It’s like you either die as a failed counterculture or succeed in capturing the zeitgeist and then overdose on your own kool-aid.
Why not? Iraq as a satellite of Iran is preferable to one run by a Baathist loony.
Maybe. But that’s not what the Bushies wanted or intended, which goes to thei competence.
Maybe, maybe not. If they decided that if their top-line goal wasn't achievable, a second-best outcome was a satellite of Iran, and on that basis decided to go ahead, then they called it exactly right.
To quote the New Yorker from 2003: “One senior British official dryly told Newsweek before the invasion, “Everyone wants to go to Baghdad. Real men want to go to Tehran.”” The idea of Iraq as a satellite of Iran was not what they wanted. They wanted to take the regime in Iran down.
Well, if you'd quoted George Bush himself, or one of his close advisors, as to what the Bush Administration wanted or thought that would have a chance of being persuasive. As it is, from my point of view your second sentence is a non sequitur relative to the first.
Look - it's fine if you don't want to believe what I believe. I remember hearing that line a lot in American media at the time, though. It was pretty clear that Bush and Co. viewed both Iraq and Iran as unfinished business. If you want to believe that Bush was actually pro-Iran, be my guest.
I think it's silly to postulate a "real reason" why the US invaded Iraq.
The invasion of Iraq required building a consensus among many different organs of government, a substantial slice of the politicians, and a substantial slice of the population. Different people were convinced by different arguments.
Back in college, a year or two after we invaded, I did a geology project where I stumbled across a line in some book about oil exploration in the 1950s. Back then we apparently didn’t even bother with wells under a certain size or quality. But in Iraq they were all mapped out, which is the expensive part of the job, and as of the 1990s they hadn’t been exploited. I tried figuring out once or twice since then if that oil was still there, and if it’s not, who took it. But that’s not the easiest question to research and I never had time for it.
Why is it hard to believe that they were competent? I think it's well-established that many of Dubya's malapropisms were campaign tactics, and the criticism of Cheney/Rumsfeld/Wolfowitz etc. was always that they were evil, not that they were stupid.
Cochran theorizes that Cheney's competence went down over time due to medical issues.
The dude did completely renege on his completely correct prediction just 9 years later:
"In an April 15, 1994 interview with C-SPAN, Cheney was asked if the U.S.-led Coalition forces should have moved into Baghdad. Cheney replied that occupying and attempting to take over the country would have been a "bad idea" and would have led to a "quagmire", explaining that:
Because if we'd gone to Baghdad we would have been all alone. There wouldn't have been anybody else with us. There would have been a U.S. occupation of Iraq. None of the Arab forces that were willing to fight with us in Kuwait were willing to invade Iraq. Once you got to Iraq and took it over, took down Saddam Hussein's government, then what are you going to put in its place? That's a very volatile part of the world, and if you take down the central government of Iraq, you could very easily end up seeing pieces of Iraq fly off: part of it, the Syrians would like to have to the west, part of it – eastern Iraq – the Iranians would like to claim, they fought over it for eight years. In the north you've got the Kurds, and if the Kurds spin loose and join with the Kurds in Turkey, then you threaten the territorial integrity of Turkey. It's a quagmire if you go that far and try to take over Iraq. The other thing was casualties. Everyone was impressed with the fact we were able to do our job with as few casualties as we had. But for the 146 Americans killed in action, and for their families – it wasn't a cheap war. And the question for the president, in terms of whether or not we went on to Baghdad, took additional casualties in an effort to get Saddam Hussein, was how many additional dead Americans is Saddam worth? Our judgment was, not very many, and I think we got it right."
I had wondered if the simpler explanation is “in an era of accelerating technology growth, there is no way to make sure your military doesn’t fall behind unless you are constantly fighting a war.”
I have often wondered if this is true, and if so, how many people in the US government think about it or plan around it.
Even beyond the battlefield testing and creation of veterans, there are a whole lot of social unity and governmental programs than depend on having a constant stream of veterans of multiple ages. WWII, Korea, and Vietnam provided huge numbers for VFWs and the VA hospitals. Beyond actual organizations, there are also things like veterans parades and other events. School aged children are asked to find a veteran to thank on Memorial Day and Veterans Day.
We had pretty small numbers from the mid 70s until the first Iraq War. Those numbers didn't go back up much until Iraq II and Afghanistan. Now there are lots of veterans for those organizations again, around the time that WWII vets were dying off in large groups, and Vietnam vets were rapidly aging.
War has been a constant around the world; I believe that all the years of recorded history where there hasn't been a war on SOMEWHERE sum to around a century.
I don't think a conspiracy to feed the VA and VFW is needed to explain US military adventurism; it's an even more far-fetched assertion than the usual hobby-horses of "War only exists in the modern era to trick the populace into giving up rights" and "War only exists in the modern era to kill off single men".
Or "war only exists because of the arms industry".
Peace probably requires as much explanation as war.
There's also a theory that there's no substitute for veterans.
SSC/ACX got a mention on the Rebel Wisdom interview with Samo Burja. It’s worth a listen. https://youtu.be/Mu19_rlwHgY
I forget the time stamp, but it’s Samo’s response to a question about Scott’s prediction grade.
https://player.fm/series/conversations-with-coleman-members-exclusive/the-failures-of-the-ny-times-with-ashley-rindsberg-s3-ep7
Coleman Hughes interviews Ashley Rindberg (specialist in what's wrong with the New York Times) and gives an enthusiastic mention of Scott's Ivermectin article at 50:00
It's a generally a fascinating interview, and finishes with a recommendation of researching the things you care about rather than using media that's pushed toward you.
"What's wrong with the New York Times" is vague and mild, and one might almost think it has something to do with the not-quite-recent difficulty.
No. Now about siding with Hitler, covering up the Holodomor (both because of convenience for business interests), being wrong about Saddam's weapons of mass production, and the amazingly sloppy 1619 project?
The hypothesis is that the NYT will do whatever it takes for money, access, and status.
I still think their science reporting is generally good, but let me know.
PSA there was a post on March 8 that didn't trigger and email notification.
Was that Ukraine or Zulresso?
Ukraine
That makes sense, I was wondering why it got so little interaction. I don't know what happened but it seems to not be happening now; let me know if it happens again.
Kenny said “FYI – I saw this post in my feed reader but didn't receive an email for it. I contacted Substack and they replied that you didn't designate the post to send an email (or something like that).” @ https://astralcodexten.substack.com/p/ukraine-thoughts-and-links/comment/5449794
Probably I accidentally clicked something I didn't mean to.
That shouldn't go on the mistakes page either. Just an Oops!
Solenoid Entity said the same thing.
He got it from me ;)
The motte is the mound.
"Motte" is from the Old French word meaning "mound". There is a mound of earth at the back of the bailey, and on that mound is where the final defensive structure is built. Rhyme helps with memory, so I wrote you a ditty:
The motte is the mound,
You can tell from the sound,
It's over Anakin,
I have the high ground.
~
https://www.castlesworld.com/img/motte-and-bailey-castle_detailed_diagram.jpg
Ironically, "moat" de-confuses it for me. The motte is well-defended. A moat is a type of defense. Therefore, moat ~ motte.
As for bailey, the word just sounds nicer to me than "motte". So the bailey is the pleasant place to be.
If you start by reading about baileys, you'll probably get confused, since baileys are walled off, just like castles. (Baileys are a type of castle.) Better to read about "motte and bailey castles" first. Part of the point of the analogy is that *both* mottes and baileys are defensible; both have walls - it's just that mottes are *more* defensible.
Someone here called "Euler" had the best one, although it only works if you know your classic rock.
Mott the Hoople (featuring David Bowie)
David Bowie (the bailey): "All the young dudes..."
ALL the young dudes?
(Motte) Well not ALL the young dudes, some.
OK, then.
(Later. David Bowie again when the chorus comes around): All the young dudes...
Guess I'm still not sure which is which but it's a good song.
The bailiff is an important dude. He thus lives in the castle aka. the bailey.
Bailey starts with B. You'd rather "B" in the bailey.
Except during an attack, when you'd rather "B" in the motte?
Is this a "you look like Serhiy Mukhin" joke?
"An object has free will and is thus an agent if it can take decisions that are [] *free* (they can escape determinism) .... "
I know that this is the classical definition of free will but it is not what people seems to mean when they think that a decision is freely made by someone. So it seems a relatively boring semantic issue : if "free will" and "agency" are defined in a way that is different from the most commonly understood meaning of the words, then yes , people do not have this impossible free will. Il you remove the requirement of non determinism, which is odd in the first place and which clash with the most common use of "free" and "agency", then people have free will.
Yes - this is a good point. What is "free will"? It's a couple of English words people use to gesture at a certain set of intuitions and impressions. It's not as if those intuitions and impressions will go away just because some narrow definition of "free will" can be shown to be logically inconsistent.
My opinion too! I would even say that the classical philosophical notion of free will is not only narrow buit does not correspond to these intuitions and impressions (i.e the non determinism criterion does not appear at all in our intuitions and impressions of what acting freely means).
My point, which is by no mean original, was that the classical philosophical definition of free will requires non determinism,, whereas people common understanding of "acting freely" does not. People common understanding of "acting freely" seems to refer to the "position" of the motivation of someone's actions, internal (free action obv.) or external (no free action). There is of course not a qualitative distinction between free and non free actions, rather it is a question of degree.
If you do not like this definition, it can be noted that what is people understanding of free will is an empirical question. In France a few month ago, a youtuber who popularizes philosophy constructed a survey with many scenarios inluding varying amount of determinism and control, and then asked his viewers to say whether or not in these scenarios people acted freely. And the answer was clear: even when the scenarios are very clearly (emphasis on very!) deterministic, a large majority of people did consider that free action was present when the person in the scenario choosed to act based on internal motivations.
So for me, the whole debate can be summed up as :
- The usual, common meaning of "free will" is to be able to act without external obstacles. By this definition , people do have free will, which is coherent with the fact that people usually feel that they can act freely (sometimes!).
- If free will is defined in a non intuitive and non autocoherent way requiring non determinism, then this free will does not exist. People are upset when you say so because they feel that they do have free will, in the common meaning of the terms.
So philosophers introduced a non-intuitive, incoherent notion, called that free will and then said to everyone that because this strange notion did in fact not exist, then people could never act freely, even when they thought that they acted freely. It seems to me a really worthless philosphical notion, be it sure can cause endless discussions!
Philosophy doesn't have a single notion of free will. You are probably objecting to libertarian free will. Libertarian free will has not been *shown* to be incoherent here, although a number of people have *said* it's incoherent.
Yes, I was commenting on the OP definition of free will, which is I think the most common one and also the Libertarian one (if I remember correctly). I do not think that this notion is incoherent in the sense that it is autocontradictory I think it is incoherent is the sense that is not coherent with the usual, common langage meaning of "free will". May be incoherent was not the correct word, non native english speaker here, as you probably already have guessed.
I think that it is important to distinguish who is using the definition. Many philosophers do use the definition that you indicated, the one requiring non determinism.
But when lay people discuss the notion of acting or not freely (which is of course tied to the notion of moral responsability, retributive justice etc.) I think that the intuitive concept they use is usually the internel/external origin of the motivation for the action. BUT because this is an intuitive concept, there is no formal definition. People do not reason to determine whether this or that action was free, they feel that it was free or not. And yes, internal/external is partly arbitrary. Somes cases are clear cut in one direction or the other, some other cases not so much. This is not math!
I've been thinking recently about a definition of "free will" involving response to incentives. An agent can be said to have free will whether or not to do X if you could affect its decision using (somewhat arbitrarily defined) incentives.
Examples:
1. If you pay me a million dollars, I'll hold my breath for a minute, thus I have free will in choosing whether or not to do that. But I can't stop my heart beating for ten seconds, or stop myself from blinking for an hour -- those things are not subject to my free will.
2. Like many people I have a caffeine addiction, but if you pay me a million dollars to never drink a coffee again then I'll manage it, thus I still have free will regarding caffeine. But there are (I suppose) some heroin addicts out there who are simply unable to resist the pull of that next shot no matter how large an incentive they are offered; in a case like this it's fair to say that the heroin addict no longer has free will with regards to heroin.
3. The coin-counting machine clearly doesn't have free will, you can't bribe it to not count coins or threaten it into declaring a penny to be a dollar. (Of course if you don't give it electricity then it won't count coins, but calling that an "incentive" is abusing my vaguely-defined term.)
4. What about animals? Can a crocodile be incentivised not to attack a delicious water buffalo that strays into range? I don't know, which seems reasonable because I don't know whether a crocodile has free will. A dog can be incentivised not to eat the delicious treat balancing on its nose, so it seems fair to say that dogs have some kind of free will.
I'm thinking about this definition not in terms of animals and machines, though, but in terms of humans. People sometimes suggest that a criminal is not responsible for their actions due to certain circumstances of their life, but the way I see it, if a person could have been persuaded _not_ to do something by an incentive then they're fully responsible for what they did.
There isn't any point in handing out rewards and punishments to entities which do not respond to rewards and punishments, ie which do not have a reward function , or utility function, or desires, preferences, etc. So Basil Fawlty thrashing his car for not starting is silly. That pretty much gives you the compatibilist theory of free will. Compatibilism is able to say what kind of entity has potential free will -- an entity with desires -- and when free will is removed in actuality-- when it is unable to act on it's desires.
But compatibilism uses a narrower criterion than the the theory that free will is just decision making. That theory is unable to cash out the meaning of the "free" at all.
A computer can make decisions, but it can't be bribed.
Unless of course you program it to accept bribes. Honestly, I'm trying to come up with a definition of free will that matches the subjective experience of being human, not necessarily one that is robust to every adversarial edge case.
This seems like a helpful way of thinking about it. These sorts of discussions don’t often rise above “everything is predestined, no one can properly be praised or blamed for anything” vs. “that’s obviously wrong, therefore free will is real.” It’s actually more of a spectrum, isn’t it? The more compulsion or duress or constraints one is under, the less free one is, right?
I think this framework is much more useful than the usual confusion between methaphisics, decisionmaking and ethics. That said, there are obvious problems with it, especially while using it as a strict rule for moral judgement.
Imagine an extremely poor fellow who's stolen some bread to survive. Would he do it if he was offered one million dollars as a counter offer? Of course not. And thus he is guilty and doesn't deserve any indulgence.
On the other hand imagine a corrupt politician who steals millions of dollars of tax money all the time. Will one offer of a million dollars change his mind and make him not corrupt anymore? Of course not. And thus he had no freedom in the matter and isn't condemnable for the corruption.
The problem with the million dollars is that it isn't enough. The politician may well be persuadable by the threat of jail or execution.
Sure! But it hightlights the inherit problem of the framework - the relativity of incentives and how to deal with it. Is this person unfree and thus not morally responsible or just wasn't presented huge enough incentive? Are people who need larger incentives less free than people who need smaller ones? I think in a common sense, extremely poor person is less free than extremely rich one, but this framework lead to the opposite conclusion. What about cases when people sacrifice everything for the sake of a greater goal? Are they the most unfree people possible?
What do you mean by "incentives don't work in his case" and by "nor are they a deterrent to other serial killers"?
Obviously if serial killer is put in jail they won't be able to kill anybody else. Also the world where serial killers are left unpunished, will have more killers, all things being equal.
I’d argue that’s more like putting down a rabid dog. Less about incentivizing morality, more about exterminating public health hazards
"an agent must be able to take decisions distinctly from the two forms of mechanism-decisions above"
Since you have defined both of them as extremes, there's a compromise available.
"If the decision process can escape determinism and give multiple different answers to the same question with the exact same initial conditions, then it's not meaningful. If the decision process will rigorously and logically give the same answer to the same question in the same initial conditions, then it's not free."
A partly determined, partly.random.event is still.partly determined so not entirely meaningless. It's not wholly free either, but complete freedom is unrealistic.
What is your true objection? are you saying that a mixture of determinism and randomness is *inconceivable* -- or just that it doesn't exist?
" unless the computer is literally relying on quantic phenomena (but as I noted, even those are not above suspicion)."
It's true that we can't completely rule in fundamental randomness emerging from the quantum level, but that doesn't mean the burden of proof is entirely on the indeterminist -- that would be a selective demand for rigour.
Causality and indeterminism aren't hopelessly incompatible. The less of the one you have, the more of the other you automatically have. Pure randomness would provide not basis for science, since anything would be equally possible, and even statistical laws would be impossible. Bur statistical laws, at least, can be found in all areas of science including quantum mechanics. QM allows exactly even probabilities between a very constrained set of possible observations such as "spin up" and "spin down".
It also has "spectral" operators, where there any value can be observed with non-zero probability, but there is nonetheless an "expected" value that is more likely than any other.
"My intuition is that all randomness is an artifact of a system being too sensitive to initial conditions or too complex to be intuited."
The "hidden variables" idea has been thoroughly investigated.
This isn't the objection you made in the first place.
I recently read this article: https://www.thatdoesntfollow.com/2022/02/lotteries-and-chess-engines-possibility.html
It gives a nice intuitive sense to what it means to say an agent "could have done otherwise", though formalizing it is difficult, bordering on impossible. You might choose this as your definition of free will, it seems more useful than a logically incoherent one.
Cool article - I like the examples from Dennett’s “intuition pumps.”
> I think a further problem is that when people express a moral judgement on the basis of a "could have", it's not an epistemic "could have"; they're not saying "it was within the realm of probabilities that you could have made a different choice, but my prediction was wrong, oh well". It's a metaphysical "could have"
It's still an epistemic could have. We are just talking not about epistemic uncertanity of an observer over a person making a decision, but about the epistemic uncertanity of the person who is making the decision. When you are making a decision you don't know yet what you will chooose. You can frame it as deciding whether you are a wicked person or not. But i don't think that such attribution errors are helpful.
When you know what you will do, you can't choose - you'll just do what you will. You just follow the script of doing the thing.
When you don't know what you will do, you have to choose. And this choice determine the future. After the choice is made you may call the outcome inevitable if you feel like it, but in the process of choosing it's just one of the options due to your epistemic uncertanity. And that what matters for morality.
And of course you preferences affect your choices. Your preferences are part of you, and its you who make the choice.
"Probabilities aren't properties of systems, they're a measurement of our uncertainty about the system; but the system itself is still 100% deterministic "
That is not a fact. Whether there is fundamental indeterminism is a currently unsolved question. (Of course, there is subjective, knightian, uncertainty as well).
I think it might be better stated as that we are not entirely sure what the definition of "determinism" is. It can't be "if you know everything, you can predict anything" because that's a truism: if you know everything you know everything, and there is no "prediction" in the usual sense of the word.
So it has to be something like "if you know x% of everything, with x < 100, then you can predict y% of everything, with y > x." But is y = 100 even conceptually? If it isn't, does that mean what is not predictable isn't a fact, or is it just inaccessible, and if it is, is that inaccessibility fundamental or practical? Et cetera.
While a lot of these questions are actually pretty answerable practically, meaning in any way they affect human lives, they still seem to be largely undecided (if not undecidable for lack of good definitions) philosophically.
"if you know everything, you can predict anything"
It can mean "if you know everything about the past, and you know exactly how past states evolve into future states, you can predict everything about the future". And does mean that.
Why keep making an exception for QM? It's fundamental physics.
If you believe in free will and you’re wrong, it’s not your fault. If you don’t believe in free will and you’re wrong, it is your fault.
“If the decision process can escape determinism and give multiple different answers to the same question with the exact same initial conditions, then it's not meaningful.”
I’m sure you didn’t notice while setting up the hypo, but this premise is begging the question (you accidentally smuggled in the idea that anything which is not fully determined is random), so I don’t think you can conclude anything from the argument.
Jeez, Godoth, our views of this issue seem pretty compatible. Yet we sort of fucking hated each other by the end of our argument about dealing with covid misinformation published on Substack. I am not being snarky here, more saying life is strange. It is a relief to have a feeling of agreement and respect for you -- hope you're having some version of the same experience as regards me.
You assume that the “weather is deterministic.” But as far as we can empirically tell, weather is probabilistic. We cannot demonstrate that weather is deterministic.
Likewise, QM is, empirically, probabilistic. It is certainly not random and certainly not fully determined.
It’s important that you realize that what you’re doing is not making empirical observations, what you’re doing is taking very small measurements and then assuming that what you believe you observe maps to incredibly complex systems that you can neither predict nor understand. You do this because it has worked well enough with much simpler systems, but that doesn’t make it true.
You think that will must be deterministic because you assume that most systems must be deterministic or theoretically random. But will might be probabilistic. Or it might be something else that you don’t grasp yet. Certainly the phenomenal experience of will is that it is neither random nor fully determined.
The better position here is to realize that you have the burden of proof to demonstrate that you can fully understand and predict a probabilistic system like weather or QM. Only then will intellectual rigor allow you to begin dismissing alternative explanations as improbable. Until then, the assumption of determinism is just a neat personal philosophy.
We know more than enough to know that in principle weather is predictable up to the limits that quantum mechanics itself imposes. In physics quantity does not have a quality all of its own, and we can readily infer from what is true about 6 degrees of freedom to what must be equally true about 6 x 10^23.
"My assumption is safe because I further assume that all other systems are qualitatively identical to the limited experimental systems I can fully predict" is not an argument I find convincing in the least.
Extreme sensitivity to initial conditions is just the thing to amplify quantum indeterminism to the classical level.
Those arguments are not very strong.
1. Only shows subjective unpredictability *can* arise from objective determinism, not that it *must*.
2. Tacitly assumes that all unpredictability must be assigned to the objective indeterminism of the system itself, if objective indeterminism is admitted to exist at all. But believers in objective indeterminism do not have to be disbelievers in subjective unpredictability. Indeed, subjective unpredictability based on sheer lack of information is inevitable anyway.
>but presumably, the kind of definition that interests most people is one that allows to distinguish between agents and mechanisms.
I don't think this is a helpful or interesting framework. By distincting agents and mechanisms in two separate clusters you implicitly smuggle the notion that agent isn't a mechanism. This notion fits our initial spiritualistic intuition, but is heavily contradicted by evidence accumulated by materialistic science. I could rant for a very long time how this is the main source of confusion about free will, but let's choose a better framework instead.
The interesting distinction is between agents and non-agents. Those who have decision making ability and those who don't. A chess program, which evaluates the situation on the desk and then makes the best move according to its utility function is a decision maker. A chess program that just output a specific sequence of moves, unrelated to the situation on the desk and has no utility function is not a decision maker.
"The interesting distinction is between agents and non-agents. Those who have decision making ability and those who don't. A chess program, which evaluates the situation on the desk and then makes the best move according to its utility function is a decision maker."
Do you think it makes sense to blame or punish a chess programme? If not, "decision maker" is the wrong way of cashing out "morally culpable agent".
We literally reward and punush the model while we train it. Not only it makes sense, its essential for the program to work. Also check my reply to Machine Interface below.
The main difference is in the complexity of human values compared to the chess-program values, as well as number of "possible moves" due to human's ability to act outside the chessboard. But it's just a quantative differance. Bellow you asked specifically about morality so lets construct an example of an ethically responsible chess program.
Suppose we take a decsion-making-chess-program optimized to win games and then try to make it care about other stuff, considered to be in the realm of morality. Let our program have an additional video input chanel allowing it to see the face of the opponent. And let the program reward function represent the emotional satisfaction of the opponent in some way, based on the state of their face. For the simplicity of our example, lets ignore all the issues with Goodheart's law and just assume that it's a valid approximation.
What happens to our new program during its training? Sometimes it'll make a very effective moves, which will lead to opponents being very sad and showing it on their faces. Such moves will be punished. As if the program have moral responsibility. And indeed, eventually the program will learn to value the visible emotional state of the opponents and become "more ethical".
The purpose of this mind experiment wasn't to encompass all possible philosophical stances on morality. It was to show how we can make a machine more ethical through the same process we use to make it effective, just by adding a new dimention. It's a demonstration of the idea that the perseived difference is just quantitative.
I do not equate empathy with morality but the idea of universal morality among all possible minds seems obviously wrong to me. Ethics is just the sphere of shouldness and utility functions can be quite arbitrary.
Obligatory smbc https://i0.wp.com/www.smbc-comics.com/comics/20160614.png?zoom=2
Well obviously there are many large differences between the two. But you need to ask a question that gets at the difference some might feel is the crucial one. You need to ask whether the 2 situations are different in a certain respect -- *but* ask the question in a way that doesn't smuggle in somebody's preferred answer in the question's underpants.
Well, I think the phrase "we can say" in your question is the element that smuggles in your preferred answer. Because you don't really literally mean *we are able to say such-and-such*, right? Because if what you and I are able to say decides the right answer here, then I'll just say "nope, buddy, chess programs and men are identical as regards freedom of choice" -- and we're done (until you come back at me and say I'm wrong). Seems like what's hiding in your phrase "we can say" is something like the idea that moral judgments of men feel meaningful to us and moral judgments of chess programs feel like nonsense, and that the difference in our feeling about the 2 judgments is indisputable evidence that the men and chess programs differ in some way as regards free will & related matters.
Edit: to add a later thought about subjectively experiencing things as morally judgeable. Of course I feel the same as you do: I experience people, but not chess programs and other inanimate objects and processes, as morally judgeable. Still, there have been some occasions when I have experienced exceptions to that generalization. There have been a few times when I have experienced somebody as being beyond judgment. Sometimes this has happened regarding people I am close to, where I am so moved by what I know they feel that I lose any capacity for judgment about what they do. Sometimes this has happened with writers and thinkers I admire -- I find out that the person was an antisemite or a wife-beater and I think, "I don't care, it's irrelevant." And there have been times when I have experienced inanimate objects as evil -- somebody's cancer, for instance. Yes, you can say calling the tumor evil is just a way of expressing distress -- but what I was experiencing felt like an actual judgment about the tumor's evil nature, not a metaphor.
Anyhow, the point of all this is that even if we used your criterion of "feels judgeable" as a litmus test for whether or not an entity has free will, the test does not work perfectly. And my guess is that as AI becomes more advanced, there will be more situations where a man-made, nonhuman entity feels judgeable to us. In fact, come to think of it, I already experience Facebook as stinky, rotten and evil (and I'm not talking here about the internet shitlords who make decisions about about the damn thing, I'm talking about Facebook itself).
so its not that our universe doesnt have free will, but that free will is definitionally impossible?
I think it's only definitionally impossible if you set up the question and define the terms the way OP did. Calling will "free" gets a metaphor going in your mind in which action is either "determinist" or "free," i.e. either constrained by circumstances that function as jailors; or not constrained, i.e. no jailers present. But this way of framing the question sort of sets things up so that everything that is not random (independent of everything else, and unpredictable) is "jailed" and unfree. So since the coin sorting machine does not give random results, it's "jailed." But what is this jail the coin sorting tray is in? Turns out the "jail" is just the construction of the coin-sorting box itself -- the various slots and holes and ramps that guide each coins into their proper place. But that's a weird way of. framing things . The slots and holes and ramps aren't the jailers that keep the tray from freely choosing where to place each coin.They are parts of the tray -- the structures that implement the trays's choosing process. The sorting tray "chooses freely" where to put each coin. It makes and carries out that choice via a structure of slots, holes and ramps. So now if you up the size and complexity of everything, and start talking about people and their synapses, and do they freely choose how to move their hand or are they "constrained" and "determined" by the laws of physics and chemistry that govern the behavior of brain cells, nerves, muscles, etc., it's still cheating in the same way. The brain cells and nerves and muscles and synapses aren't determinist constraints on the person's choice of how to move their hand -- they are the physical and chemical structures and processes via which the choice is made and implemented.
In your definition, the inanimate sorting tray has free will?
Well, I'm rejecting some of the terms in the way the problem is stated, including the term "free will." What does that term even mean? Using that term implies all kinds of stuffk, including the idea that there's a contrasting phenomenon, or maybe more than one: Will that's constrained by being in a box -- will that used to move freely among the possibilities choosing some but not others, but now is quadraplegic -- will that gets kind of bossed around, but sometimes gets to do what it wants. None of that makes any sense. (And for that matter, what does "will" really mean?) What I'm saying is that people and the sorting tray are both making choices, but in the case of people the mechanism via which the choices are made is so complex we can't see the choosing process, and feel very tempted to think a process of a whole different kind is going on.
How do you separate being rewarded as an incentive vs being rewarded on principle? As I understand it the whole "on principle" thing is just an intuition for an incentive.
By your incredible logic, each beating of our heart is an act of free choice because the physical systems that beat our heart are simply structures and processes by which the choice is implemented.
It's weird that you accuse OP of smuggling things in, when you're done just that, without explaining what it could possible mean for a person to make a choice independent of the mechanistic workings of their brain.
Well, I did use the phrase *choosing freely* in my response, above, but I did that in an effort to make my point of view more comprehensible to OP, to sort of say what I thought in a way that used some of OP's concepts. I have gone back to my post above and put the phrase in quotes, so that sentence now reads "The sorting tray "chooses freely" where to put each coin." In my view, the phrase "choosing freely" is an inhabitant of OP's way of framing the question, and, like OP's framing as a whole, is sort of incoherent and self-evacuating. What does "choosing freely" mean, exactly? Phrase implies that there's an opposite to choosing freely. What the hell would a meaningful opposite to choosing freely? Sure, you can have a guy with a gun to your head saying he'll kill you if you don't take your hat off, but then if you take your hat off then you're choosing to take it off to avoid the bullet, so you're still "choosing freely". Can you think of a meaningful opposite to free choice, and explain how lack of freedom would come about?
Seems to me that the phrase "choosing freely" is sort of like "wet water." WTF is wet water, as distinct from plain water? What's the opposite -- dry water?
'What the hell would a meaningful opposite to choosing freely? Sure, you can have a guy with a gun to your head saying he'll kill you if you don't take your hat off, but then if you take your hat off then you're choosing to take it off to avoid the bullet, so you're still "choosing freely".'
You're not choosing freely because you are not acting on the desires you would have acted on otherwise.
So there are at least two opposites to "choosing freely": lacking the basic capacity -- lacking desires , agency; and being under compulsion in a particular situation. (also, you can have compatibilist free will but lack libertarian free will).
You haven't posted on this thread for a while, and may have grown tired of it. I haven't though, so thought I'd throw out one more thing, about the idea of there being situations where there are constraints on someone's choices. Please do chime in if you're still interested. I love this stuff.
Seems to me that all sort of things can be conceptualized as interfering with someone's doing something, and the various things are wildly different in nature, in category, and in the sense in which they prevent someone from doing something. Here's what I mean. Think about the difference in the various "constraints" at play depending on how this sentence ends.
"She was not able to read the page because . . .
. . . he said he’d shoot her if she did.”
. . . she did not have her glasses.”
. . . the words on it were written in a foreign language
. . . she was sure the message there would upset her terribly.”
. . . every time she tried to read it she somehow instead just moved her eyes over the lines of print and understood what they meant.”
. . . she was a dog.”
. . . the page did not exist.”
. . . she did not exist.”
I understand your first example of not choosing freely (under compulsion, gun to head) but can you expand on the second one -- "lacking desires, agency" -- and give an example or 2?
But what if the coin-sorting tray is conscious, and is just so damn dumb and limited in its grasp of things that it experiences itself as freely choosing where each one of those coins go? And, if you expand the coin tray & its consciousness and the variety of its choices and complexity of its repertoire enough, you get us. You get me, for instance. I experience myself as freely choosing to write this, but of course with advanced enough tech you could trace all the synaptic firings, etc. that produced these words.
"but of course with advanced enough tech you could trace all the synaptic firings, etc. that produced these words."
So? That.doesnt mean you're not choosing .or freely choosing .. it just means that choosing had moving parts,.and doesn't appear out of nowhere.
According to science, the human brain/body is a complex mechanism made up of organs and tissues which are themselves made of cells which are themselves made of proteins, and so on.
Science does not tell you that you are a ghost in a deterministic machine, trapped inside it and unable to control its operation.: it tells you that you are, for better or worse, the machine itself.
So the scientific question of free will becomes the question of how the machine behaves, whether it has the combination of unpredictability, self direction, self modification and so on, that might characterise free will... depending on how you define free will.
Yes, exactly. That means that you don't have free will in any common or ethically meaningful sense.
That’s like saying faces are made of atoms and, therefore, you don’t really have a face.
If faces didn't really exist we'd have had to invent them anyway. Heh.
I read it more as saying that just because the program isn't complex enough to understand how itself runs, does not make it any more/less deterministic/free. Just because I can't calculate my own brains decision process does not mean that process isn't deterministic.
Who was it that said “If our brains were simple enough for us to understand them, we’d be so simple that we still couldn’t”? But just because a brain is made of atoms doesn’t mean there’s no such things as brains. Your decisions and actions are surely made of the untraceable concatenation of a zillion deterministic + random micro-events. Yet people still decide, and act - whether freely or under compulsion/duress.
Liked this quote so much that I hunted around to find who said it. Here's what I found:"The earliest evidence known to QI appeared in the 1977 book “The Biological Origin of Human Values” by George Edgin Pugh who was a nuclear physicist and the president of a company called Decision-Science Applications. The statement was used as a chapter epigraph with a footnote that specified an ascription to Emerson M. Pugh who was the father of the author. Both the father and son were physicists, and Emerson was a professor at The Carnegie Institute of Technology:[1]"
Found on https://quoteinvestigator.com/2016/03/05/brain/
Could you explain in details how the fact that it's possible to trace back the synaptic firings invalidates ethics?
.... At which point we run into the Hard Problem of consciousness.
Well, the problem of Consciousness gets Hard as a result of someone's framing it in a certain way: "Consciousness is a thing. I know it is, I'm experiencing it right this moment. So are you. You know it's a thing. And yet, and yet -- it's a completely different kind of thing than all others things. It's not matter, it's not energy, you can't see it, you can't smell it, you can't measure it. Yet it controls all kinds of stuff. It controls the hand that is writing these words. " Etc.
The origin framer of the hard problem was David Chalmers, but he did not define consciousness as nonphysical. He did define it as having subjectively accessible properties. And why not...it does. If I am not to refer to my subjective access to my mental states as "consciousnes" , then I need some other term, becausnits still there. There's.a difference between simplifying a problem, and switching the topic to.a different, simpler problem.
Not arguing here - just wondering aloud. Do all languages have a word for consciousness, or something sort of like what we mean by consciousness? As I think this over just now, it does not seem to me that *consciousness* is such a widely useful concept that every language would have to come up with a word to capture it. Seems like any language would have to have a words for non-conscious states -- "asleep," "stuporous," "dead" -- and a word for "self" or "I." But "consciousness"? Maybe not that useful, outside discussions of this kind. Maybe a word whose meaning is pretty hard to convey to somebody who did not grow up with that concept being one of the bricks in the house of common sense.
I don't think it's useful, but it it's still there. Stars and galaxies aren't useful eifher.
For what it's worth, I think Spinoza would agree with you. And we can go all the way back to Aristotle in terms of having a cause for every effect (at least until we get to his Unmoved Mover). But assuming absolutely everything is 100% deterministic may be putting a bit more weight on that assumption than it will bear. AFAIK, nobody has refuted Hume's skeptical observations regarding causality and the problem of induction - if past performance is no guarantee of future results, isn't causality itself just a useful guess? And if we have no logically coherent account of causality, then we certainly haven't got one for determinism.
I feel like Schroedinger’s cat probably has an opinion about that, but I don’t, actually - cheers!
I admit I don't spend much time thinking about this one so these might just be D-student thoughts, but 2 questions immediately come to mind:
- Why would such a simulation be useful in the first place? Like I said, I haven't read much on this so maybe it's obvious and I'm just missing something, but it #3's likelihood hinges on their being some use for "simulations of evolutionary history." I don't know what we would use that for now if we had the capacity to run it; is there a value proposition I'm missing?
- Assuming such simulations did have value, wouldn't they a simulation capable of figuring out that it was a simulation be functionally useless to a posthuman civilization as a data point? In that case the mere fact that we are capable of having this conversation points toward us not being a simulation - or at least that nobody's noticed "aw, shit another one figured it out" yet and deleted our trash dataset to replace us with a clean one.
Here's a possible counterargument I've never heard:
Suppose that you live in a society where energy and computing power are so abundant, and simulation algorithms so advanced, that it's possible to simulate entire societies of humans just for shits and giggles. What do you choose to simulate?
Maybe some weird history geeks would simulate societies in the distant past. But the real benefit is in simulating the present. Simulate your customers, so you'll know what they're willing to spend money on. Simulate your competitors, so you'll know what they're up to. Simulate your enemies, so you can defeat them. And most of all, simulate a billion different versions of yourself so that you know how you'd respond to various scenarios.
(And if you feel bad for all the versions of yourself that you're abusing, simulate a few billion versions that are perfectly happy to make up for it.)
Given that "present" simulations are likely to be much more useful and hence more abundant than "past" simulations, if you live in a simulation you're much more likely to find yourself in the _present_ of a society that can do these sorts of simulations than the distant past. Since we find ourselves in "the past", it is more likely that we are in the real world and that these sorts of simulations will never exist.
I would also expect lots of other simulations to be some weird sex things, or very gamefied experiences. The fact that our world isn't like this is an evidence in favour of not being in a simulation.
I have troubles accepting "thus these sorts of simulations will never exist" conclusion. It requires assuming that we are randomly selected to exist amoug all possible people among all times which seems not to be the case: we have causal link between past, present and future. Ability to deduce that there is no future because it's not now seems like cheating and definetely not how cognition engines produce valid knowledge.
If such anthropics reasoning actually worked it would mean that we can dramatically increase our chances of survival as a species by precommiting never to simulate consciouss beings and strictly controlling our population.
What is the difference between simulating consciousness and creating consciousness? And what is the difference between a posthuman simulator and a God? I think your ethics thought experiment makes this even more explicit, with a benevolent creator that creates a universe of conscious individuals so they can experience the joys of existence. In my faith specifically, the Church of Jesus Christ of Latter-day Saints (aka Mormons), your thought experiment comes surprisingly close to our doctrinal worldview on more than one level.
The obvious difference is that we would have no moral obligations to the posthuman simulators beyond (maybe) what adult children owe their parents. They certainly aren't the fount of morality, and if they tell us to slaughter the Amalekites or whatever we should say no.
Not too much is defined in specific details, unfortunately, although this is a fun area for speculation within the faith and by those who would mock our beliefs. But to put it short, we take Paul more literally than most Christians when he taught that we are “the offspring of God” and “we are the children of God: and if children, then heirs; heirs of God, and joint-heirs with Christ.” 19th century Church leader Lorenzo Snow gave us the couplet “As man now is, God once was: As God now is, man may be,” which is about as much detail as we have but is full of intriguing possibilities. The Church has a nice article on the topic of “becoming like God” here: https://www.churchofjesuschrist.org/study/manual/gospel-topics-essays/becoming-like-god?lang=eng
Obviously one difference between LDS beliefs and your benevolent simulation is the idea of an immortal soul and Heavenly afterlife. Just a small detail, but who can say what posthuman technology will and won’t be capable of. 🙂
If there ancestral simulations which simulate me, than myself in simulation execute the same algorithms as myself in reality. When I make decision in one of the simulation, I also make the same decision in every other and in real world. At this moment the distinction between me in different simulations doesn't make any sense. We are the same entity.
The purpose of multiple simulations is to wiggle your independent variables and watch the output change
I would assume that ancestral simulations are approximate-- based on limited information and incompatible theories.
How similar do simulations need to be to count as the same person?
If it's not the perfect simulation, I'm not sure the simulation argument works at all.
How good does the simulation need to be? I can imagine small changes (doing a thing five minutes later or earlier, for example) which would almost never make a difference.
You change the personality inputs, if we set Nancy's irritability to 6, what is her response to situation XYZ
I see it as you are the subject in the simulation, how does design "Parrhesia" perform in simulation XYZ with irritability set to 6.
I believe that other sorts of simulations are going to vastly outnumber ancestor simulations-- artistic simulations, scientific simulations, playful simulations.... I'm not sure how this affects the odds of *our* being in an ancestor simulation, but I think it's less likely.
Agreed. The vast majority of simulations will be for entertainment or some sort of productive purpose (ie: simulating multiple agents to all work on a research/technical problem in parallel).
I imagine that the overlap between "enough of a geek to want to see what the 2020s were like according to our best models" and "has the resources to simulate a solar system filled with ten billion sapients" to be limited to whatever the god/alien/post-human equivalent of universities are.
I consider consciousness to be the act of feeling. Is a computer system, like a modern automobile conscious if that system can feel (diagnose) problems? Perhaps. We are stressed if we feel pain, or some degradation of our physical bodily systems. Does a car feel pain if some system is distressed? Is a car distressed if there is a warning lamp illuminated? Is our distress over-rated.
My distress over the concept that we may be living in a simulation is this. 1) Someone is running the simulation, that would be God, or Gods. 2) A simulation has a purpose, that purpose is to evaluate designs. If we are in a simulation, we are the designs being tested for robustness. If this is the case, there is a higher motive for testing designs for robustness. This leads to the possibility that there is a God, and God is testing the designs—us—to find if we are worthy to proceed to the next step. Which is the basis of Christianity, if we don't meet the goal of salvation, we don't achieve the entrance to the next step (heaven). Or perhaps there is a Buddhist bent, where we all need to fulfill each circuit to achieve Nirvana. Perhaps we need to obey each of the 613 commandments of Judaism, or perhaps Judaic Law is just one circuit in the big wheel.
> My distress over the concept that we may be living in a simulation is this. 1) Someone is running the simulation, that would be God, or Gods. 2) A simulation has a purpose, that purpose is to evaluate designs. If we are in a simulation, we are the designs being tested for robustness.
Our universe might just be a grade school science project. The posterboard behind the universe says "Proof that intelligent life can evolve in a universe with only 3 spacial dimensions and 1 time dimension". This year he got an "A" on his project because of the added constraint "With mostly complete conservation of energy + entropy".
Everyone was forced to make a 6+3 universe in kindergarten.
One of the smarter people I know holds that the mis-match between classical and quantum physics is proof of the simulation, precisely because it appears to be the work of different authors with wildly different skill levels.
Basically: we're a group-work project, and the being responsible for quantum physics slapped their part together at the last minute.
He used a lot of computing power for very few planets of interest.
Computer? Planet? Power? That's one of the silly parts of the simulation hypothesis - "this place is fake, therefore I know all about the true real world". You wouldn't even know if time, energy, and place are real.
Anyway, the science project wasn't a simulation, he made a real big bang in a test tube, then let it age for 14 billion years. It was a rush job, since the science fair was the next day.
The assumption is a computer simulation. If the simulation is something else that we can never understand, then we are in the realm of magic.
I really don't think we can deduce much of anything about the higher levels. We make comedies, tragedies, naturalistic fiction, etc.
I think it's reasonable to assume that higher levels will have more goals than we can imagine.
Does a car feel pain? I’d say, no.
Unless I'm reading it wrong, Bostrom seems to assume by fiat that the simulation hypothesis could only involve posthumans simulating virtual versions of their past. …Why? Sure, posthumans, or other beings recognizable as "humans", running a sort of high-resolution Sims game is a *plausible* answer to "who would simulate the universe", but it doesn't seem like the only one. I, for one, often default to the supposition that we're probably being simulated by extremely strange "aliens", and that it's quite possible that our laws of physics don't look very much like the rules governing the "real" world (or, if you prefer, the world one level up from ours in the infinite recursion of simulations).
I don't think this is an assumption of Bostrom above. If there are other plausible sources of simulation, that only makes (3) more probable. For you to be a likely simulation, it's only necessary that there be *one* likely source of other simulators - adding more doesn't weaken the case.
He didn't call it, just accurately described the likely consequences of a hypothetical one. The idea of it is described as espoused by bloodthirsty and clueless political observers, which is darkly amusing in retrospect.
Yes. Looks like he did not expect an invasion because he knew it would be such a bad idea: <<In general, there will be no Ukrainian blitzkrieg. The statements of some experts such as “The Russian army will defeat most of the units of the Armed Forces of Ukraine in 30-40 minutes”, “Russia is able to defeat Ukraine in 10 minutes in the event of a full-scale war”, “Russia will defeat Ukraine in eight minutes” have no serious grounds.
And finally, the most important thing. An armed conflict with Ukraine is currently fundamentally not in Russia's national interests. Therefore, it is best for some overexcited Russian experts to forget about their hatred fantasies. And in order to prevent further reputational losses, never remember again.>>
If he thought Putin would actually launch a blitzkreig (which is what happened—edit: not actually a blitzkreig but something resembling one), I think he would have have struck a different tone toward the hawks.
Even higher up Russians had very little heads-up, actually, there was a lot of secrecy, so he may have been working more on intuition.