Background: MS in Molecular and Cell Biology. Next to no neuroscience. Last paragraph is best.
My first reaction is "crackpot". I could theoretically see an RNA-to-memory and back translation working, but much too slowly for human thought. Also the machinery for that would be *complicated* and it would be weird that we haven't already noticed it. Thirdly, what happens when you try to recall a memory? Generally the way cells handle information, like the genetic code, is by keeping one copy for long-term use and making copies of it every time the information is needed. So when you're wondering "what was that word again?" or trying to recall an event or image- does the brain go and make copies of every memory RNA and pop them up for comparison? That's a lot of effort for just one memory!
You could argue that there are ways to get around this, like maybe the brain has a way of focusing effort on the neuron where it knows the right memory is. A signal is directed to the right storage place, which has a signal for activating other neurons in the correct pattern to form a thought. But if the neurons are already sophisticated enough for that kind of mapping, why not cut out the middle man? Why the added complexity of the RNA system? Nature hates waste, and systems that are both expensive and complicated tend to get the boot.
It kind of sounds like you might be getting some real science from the wrong sources, though. There is in fact something known as siRNA ("silencing" or "short inhibitory" RNA), which is used to identify and defend against viral RNA found floating around in the cell. This is indeed something that is inherited across generations in some species, and is sometimes called a "memory", but it has nothing to do with the brain or thinking. It's an immune reaction, and was first discovered in worms and plants.
No, thank you! I don't have a job that lets me use my degree, so this is really refreshing. Hence my unnecessarily long and multifaceted post.
The article doesn't have nearly enough technical detail for me to seriously evaluate it. It's written so vaguely that it could be the RNA just codes for a stress response protein, which was discovered decades ago, though I assume that's not actually what it is.
Basically the "memory" they're talking about amounts to just a "danger" signal- nowhere near as detailed as what we would think of as memories (people, places, events, words), nowhere near as interesting, and probably closer to a hormone than a conscious thought.
>Why do you feel it would be too slow?
Remember that while RNA can carry a lot of information, it's not particularly easy to decode. RNA is already used to code for protein; the process uses up a fair amount of energy and can take several minutes for long RNAs (~20 seconds for short ones, supposedly). Given that on occasion you *can* remember certain facts in less than 20 seconds, an RNA-memory system seems unlikely. Moreover, the amount of information you could get off of an RNA molecule in 20 seconds would be very small; the genetic code only uses 20 (22?) characters.
>Why would it be weird if we hadn't discovered this?
There is no way that this system is as simple as RNA->Neuron firing. Generously, it would look like RNA->adapter protein->Neuron firing, or more likely RNA->adapter protein -> signal cascade (involving more proteins) -> Neuron firing. Complex RNA systems, exist, but there are very few of them. That small number isn't there because it's particularly useful; they're there because they're doing something so important that any cell that tries to tinker with and improve upon them dies instantly. So I would expect any such system to involve protein, and be very noticeable.
Moreover the machinery we *know* already works with RNA is ribosomes, which translates them into protein and which are large enough that we could see them on microscopes in the 1950s; it would be pretty weird to learn that there's been this smaller, possibly less energy intensive way of processing RNA that's just been invisible in the background this entire time.
>I don't understand why you'd need to make copies of the memory RNA
For the same reason that DNA makes an RNA copy before making a protein- to prevent the information from getting damaged. The DNA stays safe and compact in the nucleus and lasts your entire lifetime, while RNA ventures out into the cytosol and is ripped apart in about 20 minutes. Not keeping a copy might be a cheaper way of keeping memories, sure, but dangerous. Imagine waking up every morning and not recognizing your family. (Admittedly there are actually protective structures on RNA that would make it last longer, but not long enough, I think. Also the cell deliberately removes them after processing is done.)
(Also admittedly, part of the reason for this quick decay is that a. RNA is less stable than DNA, and b. RNA is usually single-stranded, and less stable than the double-stranded DNA in the nucleus. You could theoretically use double stranded RNA, sure, except that most organisms from humans to snails to house plants are on a constant search-and-destroy for double-stranded RNA, since it usually belongs to a virus.)
>You didn't ask, but...
even if RNA was a good storage vehicle, it *cannot leave the cell it starts in*, at least not at neuron-speeds. The nerves themselves would need to have some way of passing along that complexity, and mostly they are not much more complex than a light switch- they can be on or off, and they can't be on for very long. Nerves *can* usually send more than one type of signal, but not more than one at the same time, and not in rapid succession. Asking them to match RNA is really pushing their capabilities.
>The following is uninformed, irresponsible speculation
Where the first tier system is a bunch of neurons that recognize "lines" or "curved lines" all feeding into neurons that recognize "rectangles" or "circles", which feed into "pyramid-shaped" and "egg-shaped" neurons, which feed into "face" and "room"... etc., eventually forming a whole image. And I imagine memory as being an extra neuron at each of those tiers monitoring which neurons fire. And when you go searching for a memory, that extra neuron simply fires back the signals it noticed earlier, letting tier 1 activate tier 2, activating tier 3, etc., thus letting the memory recreate itself.
I've been told that stone tools were for a very long time and over a very wide area so uniform that humans' ability to make them might have been genetic rather than learned.
Are you sure you mean RNA? RNA in general is very short-lived[1], with half-lives measured in minutes, except for RNA that makes up functional structures like ribosomes (and even they tend to get recycled after a few weeks). The whole point of DNA is that it's way more stable than RNA, in part because the more reactive parts are buried inside, and in part because of its built-in error-correction code.
Also, we only get RNA directly from our mothers, because the sperm contributes *only* DNA to the fertilized zygote, so far as we know.
The problem with memory being stored by DNA (if this is what you meant to say) is *how* the "memories" get turned into new DNA. It's not completely impossible, of course, since there are mechanisms for reverse transcribing RNA, say, into DNA and inserting it into the genome. Viruses have done this for millions of years, and we think there is ample evidence that some of these genes have become part of the human genome in general. But we don't know of any such capacity to convert the consequences of lived experiences to permanent changes in the DNA of germ cells (ova and sperm), which is the only way they would be heritable.
Plus we *already* have a way for the experiences of one generation to translate to changes in the DNA of the next: natural selection. Only the people with the right DNA get to survive, and if some of that DNA codes for instincts, as it surely does, then that can be considered a form of genetic "memory."
I don't know, I don't think I follow the arrows well enough to know what they mean. Are you saying life experience changes the proteins expressed in cells (which it definitely does), and that these can be heritable? There's plenty of evidence that the mother's experience during pregnancy (and to some extent a bit before it) affects the newborn via epigenetics, and given the way development builds on previous states, these changes may be permanent (for the child), but even here there's no heritability. If the child does not experience similar streses when the grandchild is born, the effect won't be repeated.
What I'm saying is we don't know of any mechanism that would allow detailed life experience to turn into part of the DNA in the germ cells. Basically, there's no way for an ovum resting comfortably in an ovary to become aware of what the neurons are up to, short of some very broad crude shared cellular experience like there's a shortage of glucose all the time.
Or maybe another way to put it is that we're more a colony creature that we may feel: there's a part of our bodies that is concerned with reproduction (our reproductory system), and there's a part that is concerned with survival for ourselves (e.g. our brain), but these two parts don't really talk to each other much. Ova can't yell up at the brain "Holy Christ, we're getting old here, DNA is getting methylated left and right, hurry up and find a suitable sex partner wouldja?" Neurons can't yell down "You know being short has turned out to be a real handicap, and while I'll do my best to find us a tall mate, could you maybe tweak *our* genes for leg bone length to make sure?"
That sounds like a scientific question, so you're saying we should put more resources into science.
In particular, though, it's a *very hard* scientific question, one that may not be resolved for decades or centuries, so you don't want to put *all* resources into the task - you need to pace yourself with these sorts of things, or everyone starves to death in a month and you don't make much progress.
Science is the process of formulating and testing hypotheses about what is real. "Which gods are real?" is a scientific question, if indeed an extremely difficult one due to the lack of obvious ways to observe afterlives or higher realms.
What answer are you looking for? You link to yourself pointing out flaws of applying utilitarianism, so are you asking for a steelman/modification of utilitarianism that doesn’t have these problems?
I believe Meta also believes utilitarianism is flawed. I expect Pascal's Mugging (https://www.lesswrong.com/tag/pascal-s-mugging) and other critiques are already "common knowledge" (maybe).
You may not find a true-believer of utiliterianism who can provide a strong argument of the system (or be convinced by your argument).
You came across to me in your first comment as supporting util. and it's implications, then others are pointing out the flaws in the implications (which you are already aware). I'm arguing that your intention has been misunderstood.
We have no reason to believe infinite pain or pleasure actually exists, either in intensity or in duration. If we did, then there would be severe problems with the hedonic calculus, this is true.
“No reason” can still cash out to a very small probability. Then a small probability of infinite days of suffering or bliss will swamp other considerations.
One counterpoint within the utilitarianism system is why is this hypothesis privileged while the opposite hypothesis not? (God worshipers go to hell and vice versa)
A counterpoint outside the system is to say that basic utilitarianism only informs our ethics in a small amount of situations and shouldn’t be applied to the vast majority of other situations.
All this is handled within the regular objections to Pascal's Wager.
I don't understand your final point at all. Large amounts are much closer to small amounts than they are to infinite amounts. You can't draw _any_ such conclusions from an inability to calculate with infinite values. Merely large ones are still perfectly fine.
I'd invite way more than one mugging if I was the kind of person who gives in to those.
I see the problem more in the source of the evidence than in the probability attached to it. In some situations, updating on what someone tells you just isn't a good idea, regardless of what is being said.
On the other hand, if there *verifiably* was a one in a trillion chance I'd end up with eternal torture, and I could nil that chance right now by choosing to die - then perhaps I should.
Thought about it some more, and 3 possible answers came to mind
1. There's zero bayesian evidence for heaven/hell in the mugging attempts.
2. Pure utilitarianism is flawed because it's susceptible to these kinds of deceptions.
3. Giving in is shortsighted; it decreases my overall ability to evaluate possibilityspace and optimally deal with these kinds of gambles. The utilitarian choice is refusing.
I'll give you an even more counter-intuitive suggestion - assuming Christian dogma, a newly baptized baby has just been cleansed of sin, and is guaranteed heaven for the moment at least. Given the size of heavenly rewards and hellish punishments (they don't even have to be infinite, merely enormous), killing newly baptized babies before they have a chance to grow up and sin is _clearly_ in their best interests.
"The greatest good for the greatest number" is something Christians would be on board with, since God is (almost by definition) the greatest good and should be made known to the greatest number possible in order to maximise utility.
Acting as if from a position of universal love, while impossible for a sinful human, seems consistent with Christian doctrine and seems likely to approximate some form of consequentialism.
If you mean hedonic utilitarianism, then yeah, probably not, I don't think any Christians would reduce life to the pursuit of pleasure and avoidance of pain, although that's hardly unique to Christians. Preference utilitarianism is also ruled out since many people have preferences that will bring suffering to themselves and others, but again, that objection is hardly unique to Christians.
Third paragraph on love leading to consequentialism may just be my thoughts as a consequentialist, but I think it's reasonable to say that someone acting out of love would desire the best outcome for other people.
If you're going to define utilitarianism that broadly then just about everyone is a utilitarian.
Is there a moral system (actually taken seriously by someone out there) which is actively opposed to the general idea that it would be nice if the greatest good happened to the greatest number of people?
I wouldn't say opposed, but non-consequentialist systems of morality don't think that the correct way to determine the morality of an action is to consider its effects on people. The conclusions of utilitarianism are very inconvenient (all people matter equally, so self sacrifice for the sake of others is obligatory) so when people realise that they generally oppose the idea that they should be responsible for bringing about the best possible outcome for the world. Cynically, most people's morality concerns getting the best consequences for one person, themself.
I don't think it's unreasonable to define utilitarianism broadly, "utility" can refer to welfare or satisfaction, not just to happiness, and most hedonic utilitarians have very complex ideas of what happiness is anyway.
As C.S. Lewis points out (in the Screwtape Letters, IIRC), God clearly thinks it was worth having some people be born and grow to adulthood, even with the risk of Hell, and He has commanded us not to do the thing you're talking about, so unless you think you're wiser than God (impossible, since you started this premise by assuming Christian doctrine), you're wrong and the only question is how to figure out how.
The set of all existences that include entities that believe themselves to be the Christian God is larger than the set of all existences that include entities which are correct about believing themselves to be the Christian God.
Invoking “impossible” is sneaking your conclusion into your premise.
It may be necessary here to differentiate between believing in Christian doctrine and merely believing in the stated facts of the matter. You can take a utilitarian approach to divine punishments even if the believers think this is the wrong approach. I would not automatically be convinced about the doctrinal issues _even if_ heaven and hell could be empirically demonstrated.
I don't think it works that way in Christian theology. Killing a baby would be a mortal sin, not because of the calculus about the baby's future welfare (as a soul), but more because you are assuming a judging role reserved for God. It's the same argument that prevents the Christian from condoning abortion, even of babies that will die promptly at birth, or have horrible birth defects, and which prevents the Christian from condoning euthanasia, no matter the degree of suffering averted. There is only one God, and only He can properly make such decisions, and anyone who puts himself voluntarily in that position is more or less repeating the sin of Lucifer.
Obviously baby-killing is against Christian doctrine, but that hardly changes things if I'm a utilitarian who has merely become convinced that heaven and hell exist in the matter described. If I can save multiple people from hell, even at the cost of going there myself, that's surely a noble thing from the utilitarian standpoint.
Sure, but your assumption that a utilitarian could exist (or at least that a rational person could adopt that point of view) in a world in which religion had already been proven to be exactly correct -- Heaven and Hell are known to exist, exactly as laid out in the religious tracts -- strikes me as illogical.
So far as I know, utilitarianism is where you get to if you have no direct mandate from God as to what is right or wrong. If the mandate exists beyond cavil, why be utilitarian? Particularly if it goes against the mandate?
It's sort of like imagining a person doing a carefuly Bayesian analysis on the Monty Hall problem in a world where the doors are all already open. Why would anyone do that?
Not sure? I mean, I could imagine becoming convinced that Heaven and Hell are real without starting to adhere to Christian dogma just because of that, and applying a utilitarian framework to the new situation and trying to make the outcomes for people as good as possible. If a dictator is running a torture center and a pleasure palace, sending people there according to how they act, does that mean you have accept the whole thing and do as he says just to get into the pleasure palace instead of the torture center? It might be pragmatic, but it's hardly intellectually sound.
Well, OK, but this is shaping up to be a pretty weird situation. You've got an omniscient and omnipotent Being[1] handing out infinite reward and punishment -- what's left to make it a straight-up monotheistic religious revelation? All you really need to further assume is that the Being has benevolent motives, and you've got a classic Abrahamic religion, brought to hard reality, and why would anyone *not* want to do as he says? He's infinitely wise! Surely following his wise and benevolent orders is going to get you a better outcome than winging it on your own. Consequentialism is dead, because it's pointless, there's *already* an answer to every moral question, derived by a mind infinitely better than your own.
The only way out of this situation it seems to me is to assume the Being is either cruel or indifferent, and perhaps "indifferent" when you're handing out infinite punishment and reward is inherently cruel. I mean, practically speaking, what would be the difference?
So what we've got is sort of an anti-religion: the Universe is ruled by a cruel God. Under those circumstances, is there anything else to do but go mad, so at least you might find refuge in enternal denial of the horrible reality? I mean, if you played by the rules you *might* end up thinking you've got infinite reward, but if God is truly wicked, he could change his mind at any time, send you to infinite punishment just to jerk you around. I mean, he probably would, if he's a mean bastard. You couldn't trust anything he said, so "the rules" are almost certainly meaningless. So once again, consequentialism is dead. There's no point to trying to build a personal or social morality, because the evil God will just make sure it ends in misery anyway, because he's an infinite rat bastard.
This is what I mean by the problem seeming inherently self-contradictory as posed. Once you posit infinite reward and punishment, and a Being who can hand out either, it seems to me the entire scope of moral philosophy becomes void. The only reason we *have* any such thing -- the only reason we can ask ourselves what the best course of action is and have intellectual scope for success as well as error -- is *because* we don't know any such thing[2].
------------
[1] He can hardly hand out *infinite* reward and punishment if he is himself limited in what he knows, or his power. Somebody could escape! In fact, given *infinite* time, it's pretty much guaranteed *everyone* would escape, unless he has perfect knowledge and infinite power to prevent it.
[2] Wait...did I just say Christians (or any other similar faith) don't actually know God/Heaven/Hell exists? Sort of. The way a Christian theologian would put it, I think, explaining our existential uncertainty, is that while we may be certain God exists, we cannot know for sure his Master Plan.[3] So we are still left in the dark -- we still don't know exactly what we should do to fit into The Plan, and so we have scope for the debates of moral philosophy.
[3] And why doesn't God just tell us The Plan? Because (the argument goes), the only way we can *deserve* salvation is if we work some of it out ourselves -- we *must* have scope for moral philosophy and make the right choice of our own free will, because that (we're told) is definitely part of The Plan.
I’ve come to very similar conclusions, and have been personally very terrified of hell as well. Interestingly, eternal conscience punishment doesn’t hold up especially well as a theory of what hell is like when you read the Bible closely. You’ll find that most people don’t actually believe in a tortuous hell just by how they act, as you say we should do everything we can to avoid it, and that’s obviously not the case
It would be really horrible if Christianity was true. I would prefer the existence of Cthulhu to that of Jahve - while it would be horrific to get consumed by Cthulhu, it's nothing personal, and it will be over pretty quickly, unlike with what Jahve would do to me.
There is a Christian belief that believes in annihilation as opposed to a Dante’s Inferno style eternal torture. I also believe this is more backed by the literal Bible than eternal torture.
Absolutely, I think there’s plenty of viable solutions to the problem of hell. I’m just glad I found a place that appreciates the magnitude of what it’s purported to be, in a philosophy course I remember someone pegged the payoff of hell as -50, which is nothing like getting cooked forever
I quote: "Hell must be destroyed" - you'll have to read Unsong to see how that works out in the end.
I honestly find this one of the most disturbing aspects of my own Christian faith, I tend towards the idea that we should read references to hell as "the second death" literally, but many Christians disagree with that so it's not something I can be 100% sure on.
I think the worst thing isn't that Christians think I will go to Hell, but that I *deserve* it and that eternal, obscene torture is only right and proper for the unutterable crime of not worshiping their god.
It may be you have experienced caricatures of Christian theology rather than its genuine self. The eternal torture of Hell is being deprived of Heaven, and the joy of Heaven is being united with God.
That is, there is no torture *deliberately* applied to those who die in mortal sin -- no fire and brimstone, hot pokers, eagles eating your liver et cetera (outside of medieval scare paintings and Sunday school Grimm's tales told by twits). What happens (according to theology) is that after you die all that is right and true is suddenly made as clear as day to your soul, and if you have rejected grace permanently you finally and fully understand that you have chosen to be wrong, alone, wicked, and without the greatest friend anyone could ever have. And it's the knowledge that you have rejected eternal happiness when it was freely offered to you which constitutes the torture.
It's a bit like the conventional movie plot where shallow man rejects wonderful woman who Truly Loves Him Like No Other for the bimbo, and then years later, after WW has moved on and gone out of reach forever, shallow man realizes his mistake and has to live with it.
I was a Christian for 27 years and long thought it strange, the idea I was taught, that if people discounted the idea of God because of evidence against it, then died and found themselves in a Christian afterlife, they would still reject that idea. This was the claim of Church leadership; it was BS. Stranger still would be if God judged you harshly for finding the evidence for (godless) evolution compelling during your life, even after you admit you were wrong given the new evidence. Ultimately I had to admit that a God who would behave this way was immoral and deserved no worship.
1 x infinity and (all of humanity) x infinity = the same amount of infinity, so it wouldn't matter if you saved only one person or all of them. In fact, you could offset an otherwise very large but finite amount of evil by just converting a single person to Christianity by this logic.
This isn't a pathology of utilitarianism. It's a pathology of Christianity. It's basically the justification for Spanish colonialism. It doesn't matter how barbaric we are. Future generations of these savages will now be Christian, and that is worth literally any finite cost, no matter how evil.
Whereas Christian doctrine is somewhat saturated with notions of infinity, utilitarianism is not. This is a solved problem in every actual operational deployment of utilitarian principles in real-world decision-making systems, which work by not allowing infinities.
Note that infinite elements in a sequence, and thus infinite time, doesn't necessarily imply infinite magnitudes. Plenty of infinite sequences sum to a finite total, and if they also have a temporal element, asset valuation, reinforcement learning, decision theory all introduce temporal discounting, which still eliminates infinities. For instance, even with a non-zero probability that hell is real, there is also a non-zero probability that it does not last forever and God may someday decide to let the hell dwellers back in, which means you need to exponentially decay the future suffering in accordance with the probability it will end, and your expected sum of total suffering in this case is no longer infinite.
Yeah, this isn't a reasoning of utilitarians - it's the reasoning of the people who torture suspected witches for their own good, since as long as there's even the tiniest chance they will repent...
I agree this logic has been used for evil, but I think you’re mixing two different points.
If heaven is real, then I would prefer as many people there as possible. Basic utilitarianism saying they’re all equal is a failure of that system, not Christianity.
I say “basic utilitarianism” because you could define the math differently to fit our intuitions better.
I wonder if you could make a transformation of how you calculate - for instance, assigning "one person goes to heaven" a value of +1, "one person goes to hell" a value of -1, and everything else meaning 0 (as it's completely meaningless in comparison). At this point, you could rephrase utilitarianism as "act in such a way as to maximize the net amount of people going to heaven".
Fun consequence: at this point, you will have to condemn God for not just putting everyone into heaven, the way a *moral* person should.
Lets for the moment simplify to a case that is more well understood.
I am using preference utilitarianism with moral beings that are agents (meaning we have really good mathematical understanding of what it means for them to have a "preference").
In particular their preferences will allow us to create a utility function for an individual agent which maps a worldstate (or superposition of worldstates) onto a real number between 0 and 1, in such a way that higher numbers refer to preferred worlds.
Now it becomes clear that infinite utilities are meaningless, they are simply not possible.
When you feel like you have a thing with infinite utility, what you actually mean is that this thing is infinitely more valuable than this other thing, but this is just as well described as the "infinite" option being 1 utils and the reference option being 0!
Now it becomes clear that from a total utilitarian point of view the preferences that feel like they are infinite are not actually special in any way, and thus get no special treatment compared to normal preferences.
Now on the matter of personal rationality, you are completely free to assign utilities to worlds however you like, and if you actually only value on particular family of worlds (eg the worlds where you go to heaven) then feel free to assign them utility 1 and everything else utility 0.
Someone else is obviously free to disagree on your evaluation of preferable worlds.
One problem a Christian (or believer in any other religion, but I speak as a Christian) utilitarian faces is the fact that the Bible doesn't tell us to exclusively focus on prayer and evangelism to the neglect of all else. One solution is to argue that there are multiple things of value in this life and the next, so good done and souls saved are both important, the other approach would be to argue that the best way to optimise for souls saved is a life that glorifies God through both words and actions. Not sure what side I come down on but understanding the overwhelming importance of infinity does give you some sympathy with religious extremists - not sure if that's a productive extension of empathy or the start of a dangerous decent from Christian Effective Altruist to psychotic fundamentalist. Of course, an omnipotent God certainly could force us all to worship him, so the fact that he doesn't (assuming he exists) would suggest that we have good reasons to seek voluntary conversions rather than using coercion or indoctrination.
I don't feel that we ought to include infinity. It intuitively feels to me that it makes much more sense to think of heaven and hell as being 'the maximum joy/suffering it is possible to design a system to experience," which is so vastly greater than we can comprehend that it might be worth treating it as infinitely greater than everything in our mundane life, but doesn't break mathematics.
But other than that... all this seems 100% logical, even with that. Pascal's Wager is good logic. The best argument I've found against it is the 'uniquely privileged position' - if we imagine 10,000 people and one god, only the god has infinite power over the 10,000 souls, but, also, the odds that whoever is speaking to you is God is 1 in 10,001. But even with these two cases, this still suggests we ought to treat the promises of heaven and hell as being of far greater importance than all earthly things.
I dunno. As a utilitarian, the argument still makes sense to me. I just haven't yielded out of the feeling that I'm being conned, even if I can't figure out how.
Say what you will about Roman Catholic dogma, you have to admit that infinite reward/punishment idea has got to one of the most effective memes of all time.
I’d argue that the power gets its strength from the infinite punishment part. Think of the millions of minds that have been ‘hijacked’ by the fear of the rod that power wields.
You definitely do need to look into the history more, you seem to think the church spread solely through the threat of force, but that only works after you've persuaded large numbers of people that you're correct (through the promise of both spiritual and material benefits).
I should have mentioned that I was using the term meme in the pre Internet sense. A self replicating ‘organism’ in the ‘ideasphere’. As described by Douglas Hoffsteder in “Metamagical Themas”.
Utilitarianism only makes sense in an information abundant environment. If you are assigning the value of infinity to one of your variables, maybe you are not in an information abundant environment.
In the *absence* of heaven or hell, i.e. a strictly religiously-derived eternal moral punishment or reward regime, then there can be no infinities, given that life (and the population of the Earth) is finite.
And if you *do* live in a world in which the existence of religiously-derived eternal moral punishment or reward is known to exist, how can you be a utilitarian? Everyone would be 100% deontologist.
For any system with infinite utilities, there's a better system without infinity.
One problem with infinity is that infinity times anything is still infinity. If there's a 1% chance dying now will send you to heaven, and a 99% chance living a long saintly life will send you to heaven, these options both have the same utility, and so you're indifferent between them.
As another example, if you include both positive and negative infinity, you get mathematical problems with how to add them together. E.g. if you're 99% sure going to church will help you get into heaven, but there's a 1% chance the church is actually run by Satan and will help you get into hell, you have an "infinity minus infinity" calculation.
These can be resolved by having a multidimensional utility function: utility is measured by two finite numbers (secular, divine), with any amount of divine utility being preferred over any amount of secular utility. This makes divine utility play the role of infinity. Now, if you're 99% going to church grants you +1 divine utility, and 1% sure it's -1 divine utility, you'll go to church, regardless its cost to your secular utility. Though logically consistent, this theory will essentially ignore secular utility, and raises the question why you bothered using multidimensional utility anyway.
ok, so no one has made the mathematical counterpoint I use, so I guess I'll say it. First off, I do think that our moment-to-moment enjoyment is bounded. I've never heard of anyone being infinitely happy over any open interval of time, so I'm going to assume it's bounded; let's normalize the maximum at 1 util per second just for the sake of things. This seems pretty consistent with most descriptions of heaven too; it's usually "you're doing super cool stuff for all eternity and Jesus is there which makes it even better."
This still leads to the question though of "can't we generate infinite utility if you're getting 1 util/second forever." And the answer is "sort of, but only if you have a poorly defined utility function"
This is where exponential time decay becomes important. This is a commonly used economic concept. Take, for example, the question of "why don't you stick all of your money in a super safe bond, sort of starve yourself for now, and then enjoy the 1% more next year?" Generally the answer is that we devalue future goods at some percentage. So maybe I'm indifferent between you giving me 100 apples now and 105 apples one year from now. It's an Econ 101 concept, so I'm just bringing this up so that it's clear what follows isn't "special pleading."
So assumedly you can guess where I'm going with this. You can just apply an exponential decay to your time function, (let's say 5% per year) is going to be (sum over (0.95)^n)*(seconds in a year) = 631139040 utils. This is a lot of utils, but if you're sufficiently dubious of heaven, you can multiply that by a probability and get a number that's less than torturing someone.
Also, if you believe a fire and brimstone christianity, the end of the world is coming *soon* so you don't need to stress infinite populations and appropriate discount rates for that.
Is a finite mind even capable of experiencing infinite torture? It can only have a finite amount of discernible mind states, and each mind state should probably only be counted once in the utilitarian calculus.
Perhaps God gradually expands the sinners' minds to make them perpetually open to new experiences of suffering.
I have chronic pain. If it's hell I certainly wouldn't have it replaced by the groveling degradation Christians seem to recommend. If I were to imagine living an infinite amount of time with this pain as a thought experiment to make it more theologically hell like, I still wouldn't take the deal. But crank up the pain enough, inquisitor and I might change my mind if I could. The inquisition was fond of showing its victims the instruments of torture. Christianity has this aspect of the Inquisition built in when it starts talking about hell.
I'm a hedonic utilitarian, but also a materialist. There's no way for a finite material brain to experience infinite good feelings. The problem can't arise.
"The inclusion of infinity into" any system of organized thought "leads to weird counter-intuitive conclusions." I seem to recall the issue (infinity) being so inchoate that it drove
some late 19th century mathematicians insane. However, I'm pretty sure that if
we'd paid more attention to harmonic convergence in Complex Analysis 302
all these issues would be resolvable by various orders of infinity. Which is another
way of saying -- don't place any naive certitude onto conclusions you have drawn
regarding a simpleminded calculation that involves infinity unless you have a competent mathematician holding your hand.
Well you could solve that problem by starting a new psychiatric practice under a false name. Maybe wear a false mustache and dye your hair also if photos of (the real) you have emerged.
"No, sorry, I'm not *that* Scott Alexander. Kind of wish I was, he seems like a smart guy, and I'm flattered you think I might be he. But no I'm a much less interesting person who just works at head-shrinking to pay the bills."
As far as I know, there are 2 main reasons why psychiatrists want to be anonymous online.
First as fairly obvious, and applies to most white-collar jobs in general. You don't want to cause problems for your employer, and personal opinions can get very problematic.
It's not a problem anymore in Scott's case, since he is effectively self-employed now.
The second problem is closer related to the specifics of the area - psychiatrists don't want their interactions with patients to be personal, as it can be distracting to both parties (there are more complicated issues related to that, but let's leave it at that for now).
Scott fixes that by not taking ACX readers as patients - and that's why he asks other medical professionals to refer people to him directly.
At least that's how I understand it based on his previous posts on that topic.
From other friends that are therapists, the reason they strive for anonymity on the Internet is so that patients can't find out anything about them and thus from preconceptions that might prevent a treatment from working. It isn't required that a patient already be an ACX reader for that to happen, just that they are able to discover Scott is the writer and then take a look.
I understood that is why Scott is focusing on the pharmaceutical side of psychiatry so such concerns are lessened - I think he said that in one of his early ACX posts
More generally, psychiatrists and clinical psychologists becoming famous (well, internet famous) is not a problem unique to Scott, although I guess the typical solution is to pivot from practicing psychiatry/psychology to becoming a public figure, author or public speaker.
I believe Jordan Peterson still has his practice, and of course Oliver Sacks wrote a dozen or more books about his patients (with I'm sure the relevant identification details changed) throughout his career. Both of those psychiatrists are significantly more famous than Scott. Scott hasn't even written a best-seller yet.
Jordan Peterson is a clinical psychologist, and Oliver Sacks was a neurologist, so neither was a psychiatrist, in case that makes a difference, say in terms of their professional code of ethics. But yes, both of them had to deal with having patients at the same time as being famous.
I've been practicing for the past four months. I'm not taking blog readers as patients, and as far as I know the vast majority aren't aware of my blogging. I'm not doing more than the minimum possible amount of therapy.
Scott, have you looked into using Polymarket recently? I wrote a guide on the subreddit explaining how to use it cheaply:
Prediction Markets is something that is discussed by Scott quite frequently, so I thought I would write a quick guide on how to use Polymarket cheaply.
Polymarket (https://polymarket.com/) is a cryptocurrency based prediction market running on the Ethereum blockchain. Even though it is crypto based, you can only buy shares on USDC, a stablecoin pegged to the US dollar.
Because of high gas fees, it was somewhat costly to deposit small amounts of USDC into Polymarket. However recently Polymarket opened the opportunity to deposit USDC via Polygon, a Level 2 solution on the Ethereum protocol. That makes transfers into Polymarket essentially free.
Once you transfer assets into Polygon, you can use QuickSwap (https://quickswap.exchange/#/swap) to swap whatever amount into USDC to then deposit to Polymarket.
How to borrow USDC to deposit on Polymarket
NOTE: I do not advice you to do this. If you're playing with prediction markets, do it with money you can afford to lose
I use this system because I want to keep my exposure to certain cryptocurrencies while certain markets settle.
Once you move assets to Polygon, you can then deposit them into AAVE (https://app.aave.com/markets) , and earn interest while you wait to for the right market to buy shares for. Once you are ready, you can then use those assets as collateral to borrow USDC, and send it to Polymarket.
Also, you can suggest markets in the Polymarket Discord. Using the steps above, I found it cheap and intuitive to start playing with Polymarket's prediction markets.
They have different websites and communities, and maybe making another copy of a token might attract a more high-quality community or something.
(Also, it's not strictly necessary to use money when posting predictions. E.g. Metaculus and PredictionBook both run on Fake Internet Points, yet both make for very-high quality predictions and discussions.)
My view on that is "don't let the perfect be the enemy of the good". True contrition is best, but if attrition is all you can get, then it's something at least. https://www.newadvent.org/cathen/02065a.htm
Turning this into yet another "my camp versus your camp" polarisation is not going to be fruitful, I think.
I read a 1990 paper about the 'psychoticism' personality factor and it mentioned cross-assortative mating, or the idea that certain traits (and especially mental illness risk factors) are heritable because e.g. schizophrenic women are more likely to be impregnated by psychopathic men. It's a consideration that hadn't even crossed my mind before but it makes a certain amount of sense given how an abused woman is much more likely to end up with an abusive spouse, and that should have predictable effects on their offspring.
Can anyone comment on this? Is this something that's borne out by research, or is the evidence against it? Or has it simply never been studied?
Yeah, I'm actually trying to integrate the p factor research into my current study, and I was looking at correlates of the "thought disorder" theory that's a candidate of being the underlying cause of the p factor. Cross-assortative mating is pretty unrelated but it struck me as interesting.
I think I should start off with some general apologies:
(1) For diverting the sub-thread on the book review post into "All Tolkien, All The Time". Sorry about that! But you know, when the Spirit moves you...
(2) Apologise to benwave. I am of a querulous disposition and I was too harsh and antagonistic about a minor matter, viz. the most acceptable way to spell Maori in English. If I am telling them they may suggest but cannnot compel people to do what they wish, I should remember the same applies to me as well. I apologise, benwave.
I think of topic drift as normal fannish and possibly rationalist behavior. It may have something to do with ADD or possibly just being interested in a lot of things.
If people had more they wanted to say about the Arabian nights they could have said it.
I may be more tolerant because I'm interested in Tolkien, too. I'm not going to mention possible topics I wouldn't have been interested in because Don't Invoke.
I've also been trying to think of a post that Scott either wrote or (more probably) linked to. It had a title like "How to create a state without even trying" and it described how the self-interested and rational decisions of people in an anarchic pre-agrarian society very naturally led towards the birth of the centrally-governed feudal society with taxes and policing. Anyone have any idea what I'm talking about?
It sounds exactly like the argument Nozick makes in Anarchy, State, and Utopia, so I was trying to search on those words, but it's not popping up for me.
This is correct, but the style of argument of the piece I'm thinking of is very similar to that used in this book. It just doesn't concern modern society.
My life became better when I realized complaining about complaining is still complaining.
I used to just crumple internally when I'd see someone write about how people just aren't tough enough. I'm not particularly tough, I don't know how to be different, and I'm obviously just inferior.
Then I realized I was seeing material by fragile spirits who just can't take it when they hear complaints. While I grant that listening to complaints can be wearing-- especially if they're other people's complaints rather than one's own-- it still seems like complaining about other people's lack of stoicism is a little much.
Yes, this is something I realized just within the last year that has made my life a lot better.
Change whiny behavior by modeling not whining in various ways, not by whining about it. Different ways of modeling it can be effective for different types of whiners.
I see this behavior coming from the men on my father's side of the family a lot. They're a bunch of tough guys who like to complain about behavior they see as weak or fearful. And they're good at appearing tough- a lot of people fear them. But the truth is, witnessing other people's fear and sorrow distresses them profoundly. So much that they're constantly guarding themselves by signaling that they won't tolerate it, whether they're complaining about a specific person's behavior, a subculture, or a whole social movement. At the end of the day, they are the ones who don't know how to experience those emotions and not be torn apart by them. Their apparent toughness comes from their ability to cage their sorrow and fear without ever resolving them. Over time, it eats them alive.
I see so much of myself in this comment. My wife suffers from anxiety (and some depression) and every time she has an episode I just want to run from the room or tell her to stop it. And I hate that about myself. I, on the other hand, refuse to show when I’m having a rough time emotionally and can’t bring myself to ask for help when I’m overwhelmed. On the outside I look like one of the most stoic people I know, but that’s only because, to borrow your phrase, witnessing other people’s fear and sorrow distresses me profoundly. And I know that my response is to signal, through body language and other subtle cues that I don’t hide well, that I won’t tolerate it.
I want to offer a few words of encouragement. I've seen my father come a long way in this area while his brothers stagnated. I think it's because he escaped that environment where showing too much fear or sorrow puts you in a position of higher risk that someone will come along, decide you're weak, and try to take things from you. It takes a long time to adapt to a life where that's not a risk, but it's possible.
I used to take after the men on my dad's side a little, and while I tried to show sympathy for other people's fear and sorrow, it disturbed me to witness anyone really lose their shit (hence understanding how this feels from the inside). I found it helpful to remind myself that it takes strength to be with someone in their time of need. When a loved one is falling apart, that's an opportunity to show some strength by being present and offering comfort, even if you can't solve any of their problems. I guess that's the hard part- feeling powerless in the face of another person's seemingly irreconcilable emotions- but simply showing compassion and facing the fear with them goes a long way toward ameliorating those episodes of intense emotion until the storm blows over.
However, on closer inspection, is there really a difference between real money and play money? Or is it just about how serious people take it? By extension, if we see life and the universe as something very serious or play, isn't that also just a matter of perspective?
Imagine two adults walking in the street, A and B. A is barely functional, in fact, he's incontinent and shits himself. B dislikes the smell, and makes a derogatory remark.
At this point, would you say that B is incapable of dealing with weakness and it eats him alive, and thus B only has apparent toughness, and in fact, B is the weak guy?
To me, that seems obviously ridiculous. Just because B complains of the weakness of A, doesn't mean that 1) B is incapable of dealing with emotion and therefore exhibiting weakness himself, or 2) A is actually the strong person here, because they accept their weakness.
Yes, I took it to an extreme, but still - if your logic was correct, it wouldn't break.
I wouldn't say that, and no logic contained in my comment points to it by necessity. My comment is not a model intended for broad application to every example of someone who complains about another person's weakness or fear. It is a particular case of the "tough guy act" serving as a shield, and I think provides a useful perspective on why some people habitually complain about the fear or weakness they see in others. There are lots of people in the world. Sometimes they do similar things for different reasons.
This seems like a reasonable and normal proviso for social gatherings of this sort, given the current conditions.
What if the vaccine had different exclusion criteria? Would we describe the "vaccines please" differently? Would we be pressured to not have such gatherings at all?
It would matter what percentage of people were medically excluded from vaccination. It might also, for cultural-social norms reasons, matter what categories were excluded.
What if people who recently got a joint replacement were excluded from the vaccine, and that was 0.5% of the population? What if benign physical abnormalities of the ribcage excluded you from the vaccine, 3% of the population? What if pregnancy was an exclusion, and that was 0.5% of the population? What if there were a genetic marker: 15% of people have the marker, 10% of them get a very severe reaction to the vaccine, so they're excluded as a whole group, but only 40% of people know their genetic marker status?
(fully vaccinated people only, please!) It seems like an unreasonable and abnormal proviso tbh, and shows that the rational community is as irrational and scared as the general population. Government and media fearmongering works.
Why? Vaccinated people are at near-zero risk (probably less risk than that imposed by the travel to the event) from un-vaccinated people. And un-vaccinated people who attend are knowingly exposing themselves to risk. Thus, the only people who are at risk are those who are choosing, for themselves, to experience that risk. Why is that an unacceptable state of affairs? People choose to participate in risky activities all the time.
I don't buy that there is a significant portion of the population that is both at serious risk _and_ unable to be vaccinated. Most unvaccinated people at this point are either extremely low risk (young people) or are choosing not to be.
This is true of all diseases- transplant recipients have to take immunosuppressant drugs the rest of their life, which makes them vulnerable to literally everything your or my immune systems could easily fight off. Does this mean we can't gather anywhere indoors ever because we might have a cold that could be really bad for them? No. But it means that we should vaccinate people against diseases that could be only sucky for you and me but potentially fatal to them, and that we should not gather indoors while unvaccinated while there is a disease out there that there immune systems have never seen before.
This is a really small percent of the population we're talking about, but also a small sacrifice on the part of you or I. Vaccines are available if we want them. Of course there are people out there that will never get vaccines, but thankfully a) herd immunity isn't binary, the more people with immunity the harder it is to spread; b) people who have been infected also have some immunity; c) waiting also allows transplant patient's doctors to collect more data, potentially figure out an optimal dosing strategy for booster shots/timing of shots around immunosuppressants/best treatments for this subset.
This above is about transplant patients because I know the most about them, but it applies in varying degrees to other immunocompromised populations as well
Well, if I were organizing such an event, I might not be thrilled by the idea of un-vaccinated people knowingly exposing themselves to risk _in my house_. And even if my vaccinated friends and me were not in any real risk, and if the people engaging in risky behavior did so knowingly... if anything ended up happening, the event organizer would be responsible, to some degree. I would certainly feel responsible, in such a situation - and that is why I would apply similar measures, if I were in the role of the organizer.
Now, if you feel differently about the risk analysis, and you would like to host your own event, I'm not stopping you.
And, to be fair, this all depends on the size of the event, the expected number of non-vaccinated guests, and the current virus incidence rate in your area.
The organizers would _absolutely_ not be legally liable and I don't think they would be ethically liable either. And in any case, I think you are over-estimating the risk here. It's not allowing base jumping. I would be very surprised if the median age of these meetups wasn't below 40.
I just think that, with vaccine availability at where it is right now, the societal risks of meetups are relatively low (certainly lower than lots of other activities we have no issue with). To the point of the original reply: yes these things have risk, but the reaction to them is completely out of proportion to the way we act around similar risk levels. It's one thing to just be generally risk averse, but I guarantee you that individuals involved with the organization of this event have undertaken similarly risky activities on a regular basis with less thought or concern.
I'm not an anti-masker or covid denier. I got the vaccine as early as I possibly could, and still wear a mask without complaint for reasons of social signalling, but I still think that the way people think about COVID risks in general is completely irrational. And for a group that prides itself on rationality, it's a bit incongruous.
A) You're right about the median, at least for the usual people attending the meetup in question. However, one of the hosts is 76, and this does affect our risk tolerance. (A few guests are also much older, but I'm more comfortable assuming they won't come if they don't feel safe. However, we are setting our safety levels on the assumption that Dad *is* attending.)
B) Pushing back gently/honestly curious about "but I guarantee you that individuals involved with the organization of this event have undertaken similarly risky activities on a regular basis with less thought or concern." Not having done the math but knowing our usual habits, my quick guess would be "true, but only for things we value very highly." Most of us tend to be pretty risk averse in general, and in this case the more risk averse people are setting the limits.
But if you have done the math, I'd be curious what you think is about this risky that we are likely to do regularly. All I can think of is driving and flying, both of which gate some pretty important stuff.
(Not commenting on the overall discussion - I've done that elsewhere - but wanted to respond to those specific points.)
Well, that would work if people were different. But as it is....an unvaccinated person invited to go to an event, where he hobnobs with other unvaccinated people who may be carrying the disease, is not only exposing himself *but also* is family, friends, and anyone else with whom he'll come in contact back home.
Were it the case that *those* people would not resent the event organizer for encouraging something that exposed them more than they would otherwise be to the disease, then the organizer could rest easy.
1. I would not agree about near-zero risk. Highly reduced risk yes, especially of dying, but there's a not-insignificant risk of getting infected, and I don't think we know to what extent the vaccine protects against long Covid?
2. Unvaccinated people risk infecting each other in the regular manner. Surely you don't want this to happen?
Precisely. It's becoming very difficult to understand exactly what western gouvernements wants to achieve. At first it was avoiding hospital saturation, then protecting the elderly and other high risk people, but now it's apparently reaching herd immunity through vaccination. Not realistic, and i can not see what practical advantage it would have. Virus will not be eradicated as there are multiple animal reservoirs. The high target means vaccinating children or very young adults, who are not at serious risk anyway. It looks more like a political goal instead of a reasonable health measure.. .
OK, that just proves the general point. If your vaccine works, you shouldn't be worried; if your vaccine doesn't work, then it doesn't matter if those round you have been vaccinated. The rational community is behaving irrationally.
Agreed. You know... it’s a funny thing. Naming themselves “Rationalists”undermines the whole rational project. Communities already suffer from group think. But a community that labels itself rational (and therefore everyone else irrational) is really setting itself up for failure.
It proves nothing. There is a risk to yourself and others if you attend unvaccinated, increasing with fraction who are unvaccinated (or partially-vaccinated). The risk for small fractions is low, but establishing a low but nonzero limit is logistically difficult and what fraction is acceptable is highly debatable. The only stable Schelling point is 0. This also has the benefit that if a few people disregard it - it's a request rather than an enforced demand - that remains safe.
Let's accept your argument. Non-vaccinated people pose a fractional risk to others. Although the risk is fractional, it compounds into a large risk once everyone gathers together. Because risks are best avoided, non-vaccinated people should not attend the meetup.
Very well. Let's keep going. Vaccinated people can also become infected. We know this because there are a cases of vaccinated people becoming reinfected. An infected person poses a fractional risk to others. Although the risk is fractional, it compounds into a large risk once everyone gathers together. Because risks are best avoided, vaccinated people should also not attend the meetup.
The conclusion is that no one should attend the meetup. But the conclusion is clearly absurd. Reasonable people do not try to reduce risks to zero.
None of this even mentions what "the risk" is. Which, is equally important. The risk to a vaccinated person is that they might experience a mild cold. Much different than drowning in lung fluid.
I hear you. I worked at a COVID recovery center for homeless over the summer and never got sick, because I wore a good mask and we practiced social distancing as much as possible. Now I'm fully vaccinated, and I know that the risk to myself of severe disease or death from COVID (which was already extremely low, much lower than the risk of death from the amount of driving I do in a week or two) is now basically non-existent. However, I totally support event organizers requiring vaccination for entrance, especially if it's an indoor event. That's not because those people pose a risk to me, but because the continued community spread of COVID leads to deaths and suffering- some people can't get vaccinated, or don't post an immune response to the shots, ie transplant patients. Once the probability of spread is low enough/prevalence of vaccines is high enough/effective vaccination strategy or treatments are available to immunocompromised people, I don't care what unvaccinated people do, but until then, requiring vaccination is an incentive for people to get vaccinated even when there risk is basically nill, and a signal that society is valuing vaccinations in people.
You are completely missing the point. The *Schelling* point.
You're also treating risk as binary, when it is not. An order of magnitude of difference in risk is meaningful and has practical consequences for what is a prudent course of action.
That is not entirely true. You are overlooking a major public health issue, which is the emergence of variants (that may defy existing natural or vaccine-derived immunity). Viruses, unlike bacteria, can only mutate during an active infection. So the speed with which variants will emerge -- variants which, again I will emphasize *could* evade vaccine or natural-infection derived immunity -- is determined by the number of active infections that are going on, even if those infections are mild -- do not threaten the life of the infected, or are even asymptomatic.
That is, every unvaccinated person is an invitation to the virus to come try its luck at evolving, a throw of the dice humanity makes to see whether *this* time the virus will turn into something new and nasty.
So there really is a John Donne sense in which the choice to be unvaccinate might impose future costs to even the vaccinated. Indeed, it necessarily imposes even present costs, in the sense that it necessarily raises the probablity of a deadly variant emerging, and a sensible polity would take precautions against that -- research into new vaccines, for example.
1) if the coronavirus is not eradicated, but more people are vaccinated, the probability of a new variant emerging per day/week/month goes down. It's not binary, where it can't evolve with no hosts or it can evolve with some hosts, it's more of a spectrum, with the less hosts available the less likely it is to evolve new variants
Is he? Your argument assumes that if everyone was vaccinated today that the corona virus would be eradicated. But while that may be true of viruses like the Measles, it's not true of Corona viruses. Covid-19 vaccinations confer short lived immunity. Reinfections will be common even in those who have been vaccinated. More transmissible variants are going evolve. But, it's rare for variants to become more virulent. And fortunately, the symptoms of those who are reinfected are mild.
So far as I know some of your key assumptions are, at best, unproved and some of the logic is inappropriate. For example, that vaccination confers brief immunity. Never seen that demonstrated, and what little data I have seen seem to suggest the opposite.
And certainly it's unusual for new varients to become more virulent, but you know, those rare events are exactly where new pandemic disease comes from in the first place. Clearly SARS-CoV-2 *did* evolve to become much more virulent some time in the recent past. That's why it's a problem now. When it comes to problems that have the potential for exponential growth (like pandemics), you cannot ignore the Black Swan events.
There are animal reservoir anyway (both wild and domestic) so it will never be eradicated. Now that the vulnerable people got a shot and soon two, the risk of covid becomes less than a typical flu, and i hope people will remember how much they cared about flu before end 2019 (i know, 2 years, a different era...the answer is: not much). Why is it different now, apart as tribe signaling?
"If your vaccine works, you shouldn't be worried; if your vaccine doesn't work, then it doesn't matter if those round you have been vaccinated."
Whether the vaccine works is not binary. It can work very well, but not perfectly, in which case you get to decide whether (in this case) "if you would otherwise have gotten it, roll a d20; on a natural 1, you still get Covid" is good enough. (I am simplifying outrageously, this is true on a population level but surely not on a personal one, but I'm not sure we know enough to tease out the personal-level risk yet.) If it isn't, you may want to require two natural ones (on people who would otherwise have gotten it - you and the person potentially transmitting to you) to get it - which is what caring whether the person you are considering interacting with has been vaccinated is.
We wouldn’t want to see unvaccinated people permanently excluded from meetups—one of the things I most value about meetups is the diversity of attendees, which includes varying beliefs about COVID. I’ve seen people everywhere in this range, some wanting in-person meetups ASAP and some on the other extreme of no in-person events until vaccination. Not surprised that the Bay Area folks are on the super-cautious side, but hopefully even they will open up to unvaccinated people in time.
A) I agree with you about valuing intellectual diversity. This is a personal risk tolerance calculation, not an ideological purity one.
B) From my perspective, the level of Covid concern under which it is a good idea to hold large indoor events but request only vaccinated people attend > the level of Covid concern under which it is a good idea to hold large indoor events and welcome everyone. When we reach the latter level of Covid concern, we'll drop the requirement. I fully recognize that many, maybe even most people here, think we've already reached the second level, but we (the specific people hosting) are for various reasons not very tolerant of this specific risk. We're willing to host a large indoor gathering, but not (yet) to do so without worrying about vaccination. Give it another month or three (depending on case numbers - IMO, county case rate will hit zero/day in another few weeks and it will all stop mattering, but I'm not exactly a superforecaster). I have no intention of criticizing anyone who is inclined to host under different rules, but I'm not presently willing to.
So yeah, pretty much what you said, but with a bit of emphasis on "in time" ie "give it a bit".
This. More specifically, is there some reason why a vaccine would be desirable or necessary in the case of a person who has already had COVID and has a positive antigen test?
It's been weirding me out that somehow the conversation has morphed into vaccinated vs. non-vaccinated, not antigens vs. non-antigen-possessing. Is there some point of vaccines beyond developing antigens? Why would someone decide to take a vaccine with a non-zero risk attached of a severe reaction if they already demonstrably have antigens?
I agree, except I didn't know this word "antigen". I hereby invoke the Sapir–Whorf hypothesis.
I'm pretty sure I got Covid, but I didn't bother confirming with a test (my wife tested positive, then I got sick in quarantine several days after my negative Covid test.) I've therefore put off vaccination to allow others to have the opportunity first. Should I be allowed into the party? Trick question, I can't go to any ACX meetups as I'm not in California.
Because if you say "antigen-possessing," even in a space like ACX, a lot of people are gonna say "Huh?"
David says he's already answered this, but I will too - immunity is immunity is immunity. My personal suggestion would be not to assume you're immune if your Covid case was 10+ months ago and your test also months ago, because I personally know one person who got it twice, about ten months apart, so I'd worry about immunity waning. But "immunity, or good reason to believe you have immunity, ideally at around the level given by the vaccine (which I assume is what natural immunity gives you)" is the actual request.
David Piepgrass, I hope this answers your question for you, if you were in California!
My guess is the #1 reason would be clinical uncertainty. That is, *did* you actually have COVID, and did you have it long enough/serious enough to develop a solid immunity? The diagnostics are not so infallible that those questions can be answered with 100% certainty, so just to be on the safe side, you might take the vaccine.
It's also conceivable that the vaccine improves immunity over what infection provides. There are situations where that can plausibly occur, e.g. the original infection was very mild and/or mostly snuffed out by the "innate" immune system, so that you did not develop much of an "acquired" (antibody-mediated) imunity. Vaccination could in principle provide valuable boost to acquired immunity, more or less because vaccination would simulate a much more severe infection.
I have issue with "long enough to develop serious immunity". If you got asymptomatic and /or short covid, what kept it mild in the first place? An efficient immune response, for whatever reason (overall better immune system, cross immunity through pervious corona infection, virus entry more difficult in your cells,...) If anything, it should indicate less risk for future infection than a hard case, regardless of antigen level. IMHO the issue is to be really sure you got infected in the first place, and is the reason why so many countries insist on at least one shot for people who already got it (in mine it's the full two shots, not surprising given how well covid was dealed with in my country (Belgium)). It's just easier not to bother pre-testing... Antigen testing was done very little during this crisis anyway. I see three possible reasons: expensive, not reliable or it was not in interest of gouvernement control that people knew of they had it or not
Nobody knows (why some people get a very mild case of COVID and others don't). That's one of the big mysteries of this bug, and will generate a lot of interesting research in the future. So the argument that you should get a shot anyway is just covering all the bases, just being cautious.
And considering the extremely small cost (and risk) of a vaccination, versus the orders of magnitude greater risk of COVID itself, I'm baffled why anyone other than certain very special categories of people even think twice about it. Do you also hesitate to get an update to your tetanus vaccine if you step on something sharp in your suburban backyard not within 100 miles of a cow, so that your odds of actually contracting tetanus are teeny? I mean, why bother? It's a waste of glucose to task the neurons with working it out.
And from the public health viewpoint, it's worth remembering the potential negatives of *not* warning people to be careful hugely outweigh the potential negative of being too cautious. People will grumble about the latter, but in the case of the former they're capable of hanging you from a lamppost. So it doesn't surpirse me at all that those people are super duper cautious.
I haven't seen this point yet in the other comments but apologies if I'm duplicating someone else's thoughts.
My sense is that we're currently in an in-between state where enough of the population is vaccinated that it's viable to have events with a "fully vaccinated people only" standard but not enough to have achieved anything resembling herd immunity.
Without herd immunity there's still the potential for exponential growth. In an exponential growth regime, each new case not only infects R>1 people in the first generation but R^2 in the second, R^3 in the third, etc. until a lockdown or other intervention occurs. So even if each person at the event is either safe or consenting, the majority of the risk is borne by people not at the event.
Once herd immunity is achieved (or exponential growth is otherwise prevented) the calculus changed dramatically. Since new cases will decay rather than grow exponentially, the risk level is approximately limited to the attendees. With risk limited to people who've given their consent, I wouldn't see the need to limit unvaccinated people from attending.
All this thinking is of course contingent on the specifics of the current situation. If only 40% of people knew their genetic status and as a result only 40% could be vaccinated then we wouldn't credibly be able to expect herd immunity in the coming months, and a different policy would likely be better. But given the current situation (no herd immunity, large number of vaccinations, herd immunity possibly on the horizon) this policy seems coherent at a minimum and IMO likely correct.
I don't think the numbers bear that out. California is currently at 54.4% partially vaccinated and 43.6% fully vaccinated. I think the lowest credible estimate for prior infection is ~20%. Assuming 95% immunity for full vaccination or prior infection, 70% for partial vaccination and zero correlation between vaccination and prior infection, that's 59.2% effective immunity, R0 for baseline COVID was ~2.5; the variants are more infectious but not by such a large margin as first reported, more like a 20% increase for R0 = 3.0. Which means effective R value of 1.22 even if we assume a completely homogenous population with zero ongoing COVID-avoidance behaviors, both of which are wrong.
So, with quadruple worst case assumptions, we could tease out barely-exponential growth that would be outpaced by ongoing vaccination and even then would top out at <8% of the population infected before full herd immunity. With only three out of four worst-case assumptions, I don't think you get exponential growth at all.
Our host being high-risk on account of age, it is perhaps reasonable for him to not volunteer for an enhanced risk of infection even if he is mathematically confident that his death would be a singular event and not the beginning of an exponential chain, of course. But as a matter of broader social policy, "we can't afford to do this because exponential growth is still on the table", is probably not really on the table.
I super appreciate the analytical response! That's a really good analysis and I was definitely overestimating the risk of exponential spread. There are still two points of concern in my mind but I'm much less confident on them than I am my original point.
First, even a R value of 1.22 is a doubling every three generations. That's about a doubling every two weeks which while not awful probably isn't great. And 8% of the population isn't huge, but would still represent a 1/3 to 1/2 increase in Covid cases which isn't great (that's very much an extreme worst case scenario though). Both of those things are still dramatically different from the "unbounded exponential growth" view I had in my head.
Second, from a policy point of view I can see the validity of "wait until herd immunity" as a Schelling point for large events with unvaccinated people both to create incentives to get vaccinated and to help with coordination. That said I'm not a policy expert in the slightest and haven't seen the kind of leadership on a local/state/national level that would suggest that that kind of policy is really creating incentives.
Really conversation analysis and thanks for doing the analysis!
>Second, from a policy point of view I can see the validity of "wait until herd immunity"
What makes you think we haven't already reached herd immunity? What is the observable datum that you are waiting for that signifies "OK, now we have reached herd immunity"?
My analysis had to make every possible worst-case assumption to get an R value of 1.22, so the actual R value has *probably* already dropped below zero. And of course observed Covid cases and deaths have been in a roughly exponential decline for several months, which is about what you'd expect once the threshold for herd immunity has been crossed.
That’s a really interesting point and I really appreciate the chance to think carefully about what I actually mean when I think about herd immunity.
My first instinct would be to pick either a vaccination or vaccination + natural immunity such that the R value is < 1 even without any changes in behavior. Based on the numbers we’ve been discussing it seems like that’d be around when the people who are currently partially vaccinated become fully vaccinated.
That said I do see the value in defining her immunity based on the number of cases rather than the number of vaccines.
In general my intuition is the possible downsides of another major covid wave justifies hanging on the last few weeks to hit “full herd immunity” but also see how other data (case counts, etc) suggest we might be there already.
This is the problem Germany is having (and trying to solve awkwardly by generalizing "vaccinated" to "vaccinated, recovered or tested", but not quite succeeding because the resulting pressure on test centers has made official tests hard to get). I don't know how I'd solve it for serious events, but private social gatherings to me seem sufficiently low-scale that it's perfectly fine to go "if you don't like it, find another party".
I'm not all that familiar with psychiatric ethics so I won't comment on "people who read the blog." But why don't you just let people apply and then waitlist the extras? Or refer them to other practices you trust. That would allow you to fill up your practice more easily and provide more value to your target market. If you get overwhelmed you can either hire other psychologists or you can hire a secretary type to handle referring the ones you reject (which you'll be able to afford if you're that swamped with clients).
Restricting supply is a bad move if you want to avoid getting overwhelmed. It's a good move if you're trying to make Lorien the Gucci Bag of psychology. But that seems like the opposite of what you want.
For the book reviews contest, not to tip the scales, but I've find myself fatigued from the format and disliking the more recent set of reviews and recalling the first handful of reviews kindly (but only vaguely).
Are others feeling fatigued from the large tranche? Things like:
- the first time someone links to SSC/ACX with a wink-nudge to the judge-author, it's funny and cheeky. After that, it feels like pandering but maybe appropriate context, so hard to evaluate
- reviews that are too long held my attention earlier in the tranche, but are now quickly disfavored
Not sure how you measure bias towards an idea from the inside separately from the idea just being correct or not. For myself before reading the review I wasn't deeply familiar with Georgism but had a vaguely positive impression, and reading the review I liked it a lot.
I don't think I've ever heard a coherent argument against Georgism, which seems like a gap so if you have one I'd be interested.
I did like it because Georgism was completely new to me. Definitely my favorite review so far (incl all nonfinalists) even when penalizing for the length.
I have times when I am too busy to read them. Then if I want to catch up I have to search the generically named “book review” emails to figure out where I left off. I don’t think the reviews are added to a central location but it would help.
Agree; I think that the only reviews that have been really worthwhile were "Progress and Poverty" and "On the Natural Faculties" possibly also the one about LBJ. We did not need to see 12+
I'm an entrepreneur who has founded and exited (some successfully) a handful ventures. And yet, I still don't know the best destination for finding co-founders/partners for future ventures. I have a few semi-evolved concepts that I'd like to explore. I talk to friends and friends of friends, which is certainly productive. But I wish there was some kind of a matching service where one could meet people and immediately jump in on some brainstorming sessions and the like. Any suggestions?
Seems like it would be hard, as you need an extremely high level of trust to start a company with someone. The only way I can see it working if you don't have a preexisting relationship is if both parties are extremely wealthy and high status, so that the reputational costs of defecting outweigh the short term financial benefits.
Wouldn't you have the same concerns with finding long-term romantic partners? And yet, many of the traditional "trust"-based systems are being replaced by online matchmaking.
I'm perfectly willing to date a person for years before deciding whether or not to marry them. Typically, you want to launch your joint business venture faster than that.
Hmmm. What should require a higher-friction entry point: A multi-year romantic relationship or a multi-year professional relationship? I’d say it’s pretty even for me.
You can exit a romantic relationship early without much cost if it is clear it isn't working out. Exiting a business partnership early can incur a lot more cost, so it requires a lot higher trust at the start.
Doesn’t that assume that the “relationship” starts on day one? I’d say that there is often an extended period of collaboration (6-12 month is not unusual) before commitment. Also, often there are more than two founders, so perhaps the risk is diluted somewhat. Does this mean that we should change our metaphor to polyamory relationships? (Something I don’t know much about).
Alice is given two bags of gold. She gets to look inside each of them and see how much it contains; the value of the contents are IID unif(0,1) (for the sake of the puzzle we shall assume that gold is infinitely divisible).
She picks one of the bags and shows its contents to Bob. Bob then chooses one bag, and Alice gets to keep the other.
What is Alice's optimal strategy? What is Bob's optimal response?
Each of them is making a decision based on their guess of what the other will decide. So if you think you can outguess the other person, you do that. If you don't think you can outguess the other person, you base your decision on a coinflip, thereby setting your odds to 50-50.
I'm assuming that the bags have unequal value, and that Bob can't tell which one is better.
Wait, can Bob tell how close the bag he sees is to the maximum or minimum value? I don't actually know what "IID unif(0,1)" means.
In that case, I think Alice should show the bag that's closer to the median possible value, in order to give as little information as possible about which bag is better. And Bob should choose the bag he sees if it's above the median, and the other bag if it isn't.
I think Bob still gets only 0.5 expected payoff from using Bullseye's strategy. Half of the time the unseen bag will be smaller than the shown bag, and half the time it will be larger, and those two regions of possibilities are symetric around 0.5, regardless of whether the shown bag is above or below 0.5.
He'll get it right only half the time, but he'll tend to get it right when it matters more and wrong when it matters less.
When they're both high, or both low, he'll get it wrong, but the difference will probably be small. When one is high and the other is low, he'll get it right, and the difference will probably be large.
I've thought about this some more, and come to the same conclusion.
Firstly, Bob probably doesn't know whether the unseen bag is high or low. (Either because Alice's strategy conceals that information, or because he doesn't know her strategy.) In that case, the unseen bag's expected value is the middle value, so he should make his choice based on how the seen bag compares to that.
If Bob uses the above strategy, and both bags are high, Alice should show the one that's less high so he'll pick that. If Bob uses the above strategy, and both bags are low, Alice should show the one that's less low so he'll pick the other. If Bob uses the above strategy, and one bag is high and the other is low, Bob will pick the high one regardless of what Alice does.
If Bob chooses at random, it doesn't matter what Alice does, so she might as well assume he's using the right strategy. (Also, he's very unlikely to choose at random if the bag he sees is very high or very low.)
When both of them use the right strategy, Bob will only pick the right bag when one is high and the other is low (which is half the time). But he choice will tend to make a bigger difference when one is high and the other is low.
I'm still not sure I understand your reasoning here's. I think you have an issue here:
> Firstly, Bob probably doesn't know whether the unseen bag is high or low. (Either because Alice's strategy conceals that information, or because he doesn't know her strategy.)
Note, it's a premise of the problem that Alice will choose the optimal strategy. With that given, we can be absolutely sure that Bob doesn't know whether the unseen bag is high or low. If he did, he'd be able to rob Alice every time, contradicting the premise that her strategy was optimal.
Where's the issue? You quote them saying Bob doesn't know whether the unseen bag is high or low, then you provide another argument that Bob doesn't know whether the unseen bag is high or low.
(Although you're argument doesn't always hold. If the bag Alice shows is high, then knowing that the other bag is also high doesn't tell Bob which he should pick. And the same goes if they're both low.)
I'm trying to understanding the overall reasoning better by picking at one particular point where we're not aligned. Bullseye says Bob *probably* doesn't know whether the unseen bag is high, but my understanding is that there's no way at all Bob will have information about the unseen bag, so there's some discrepency here about our understandings of the underlying structure of the problem. I'm just pulling at a loose thread; I don't have a grand counterargument at this point.
I didn't calculate so far but that seem likely on the basis of my analysis of the case where bags contain 0, 1 or 2 with equal probability. Alice can only have a chance of winning if one has 1 and she shows it to Bob. But Bob can pick randomly and will get 1 on average.
Bob can always go for the "pick a random bag" strategy. So Alice can never do better than 50/50 on getting the larger bag.
Can she enforce that? Yes. She shows him the bag whose amount of gold is closer to 0.5. It's easy to see the being closer to 0.5 is an independent property of having more gold, so this gives Bob no information about whether he should switch. Therefore again, Bob has no option better than picking at random.
(An interesting follow-up I don't have an answer for: In this scenario, Bob can keep his EV but reduce his variance by always taking the bag he sees. Is there a strategy that also doesn't let Bob reduce his variance?)
> She shows him the bag whose amount of gold is closer to 0.5
> ...
> Bob has no option better than picking at random.
Maybe I'm misunderstanding what you're saying, but I don't think this is true in this case. I think bob sticking if the bag's value is > 0.5 is more optimal than 50/50.
Effectively, if bob is shown a bag with 0.6 gold in it, and he knows alice is using the above strategy, he knows that the other bag either has >0.6 gold, or <0.4 gold.
He also knows, based on random distribution, that those two outcomes are equally likely. As such, if he switches there's a 50% chance he gets <0.4, and a 50% chance he gets >0.6. However, if he sticks, there's a 100% chance he gets 0.6. A 50% chance at <0.4 or >0.6 has an expected result of 0.5 gold, and sticking has an expected result of 0.6 gold, so it makes sense to stick. The opposite logic works if he's shown one with <0.5 gold.
This is correct. Bob can only get the larger bag 50% of the time, but he'll end up with more gold on average because he usually wins when the difference is big and loses when the difference is small. With optimal strategy if the difference between the bags is 0.05 he'll almost never win, but if it's more than 0.5 he literally can't lose.
I think this is still Alice's optimal strategy, though. If Bob sticks to this strategy - he switches bags iff he sees a bag with <0.5 - then there's no way to beat him if you get one bag with >0.5 and one bag with <0.5. The best you can do is always beat him if both bags are on the same side of 0.5, which this strategy does.
Without having looked at any other replies, here's my attempt (although I'm not totally convinced):
Alice shows the bag with the amount of gold with the smaller absolute distance from 0.5 (if they have the same, show either one).
Assuming this is, in fact, the optimal strategy and Bob has reasoned as such, then he has been given no information about which bag contains more gold (since half of those bags of gold with more extreme amounts are larger than the one he has seen, and half are smaller). He is forced to take a 50/50 guess.
Each of Alice and Bob end up with 0.5 probability of getting the bigger bag, for I guess expected value of 0.5 each, each game.
You're right about Alice's optimal strategy, but Bob can still do better than 50/50 I'm pretty sure. Bob doesn't get any information about which bag is bigger, but he does get information about how big the difference is likely to be in each direction.
Assume Alice follows that strategy, and shows Bob a bag with value of either X or 1.0-X, with X < 0.5 in either case.
Bob knows that the unseen bag U is either in the range 0 < U < X, or 1.0-X < U < 1.0.
Those ranges are symmetrical around 0.5 and the probability distribution is uniform, so Bob's expected value of U is always 0.5.
Therefore, if Bob sees a bag with value > 0.5 he should take it, and if he sees one < 0.5 he should take the other. You're right that he only gets the bigger bag half the time, but he ends up with more gold on average because he usually loses when the difference is small and wins when the difference is big. If the difference between the bags is only 0.05, Bob loses almost all the time. But if it's bigger than 0.5 he will literally never lose.
Bob's winnings in 10k rounds always picking random: 5033.5036381671935
Bob's winnings in 10k rounds always picking the bag if its value > 0.5: 6302.920417899139
That's pretty representative. You get around 0.5 if you pick randomly (obviously), but if bob picks the bag if it's >0.5, and switches otherwise, his expected value is ~0.63/round on average.
This matches up with what you're saying, just wanted to add that if you simulate it with code, you get the same outcome.
Thanks for checking that. I might be misreading something, but I think this line is wrong?:
bob_sees = if (b1 - 0.5).abs > (b2 - 0.1).abs then "b1" else "b2" end
Two issues actually: first, should be (b2 - 0.5) instead of (b2 - 0.1). And second I think you're showing Bob the wrong bag? Since the absolute value on each side is the distance from 0.5 so if b1 has a bigger distance then b2 should be the one you show him.
Awesome. That's that, then (I note the correction you made further down, but I'll reply here).
I think my big mistake was aiming to "get the bigger bag" rather than "maximize the gold payoff". In retrospect it's obvious that for-sure payoff of >0.5 is going to be better than E(payoff)=0.5 in terms of payoff, even if it doesn't expect to give you a larger bag than the other person.
Now I wonder, with Bob's better strategy in mind, does that call Alice's strategy into question? If that's Bob's optimal strategy, then that means Alice is forced to get robbed every time it's a very-small bag vs a medium-large bag, which makes me uncertain whether there's not some way to improve her strategy.
I think it's still Alice's best option. With Bob's strategy, he's guaranteed to always win in cases where one bag is above 0.5 and the other is below, no matter which Alice shows him. At least with this strategy Alice is also guaranteed to win when both are on the same side of 0.5.
Bob just has a better position overall- the power to make the final choice just counts for more than Alice's advantage in information.
There can be no strategy that favours either Alice over Bob, or Bob over Alice. This is because each can always pick at random which means their expected gains will be the same.
If you think you have found a winning strategy, then remember that random selection by the other player will render your strategy ineffective.
(This is similar to how there can't be a winning rock-paper-scissors strategy because one player can always play randomly.)
If this were a simple odds-evens game, you'd be right. But it's not; the game is rigged in Bob's favor. If he uses the right strategy, he'll come out ahead no matter what Alice does.
The fact that Bob can see one of the bags means that he can guess which one is better. And the bigger the difference, the more likely it is for his guess to be right.
The strategy if showing the bag closer to 0.5 gives Bob a utility of 0.5 + 1/12, whereas the lower bound on his utility is only 0.5. (by the strategy of picking one of the bags at random)
Is there a proof that this strategy is optimal? What is the source of the problem? Thanks.
I think it's a good case study (no pun intended) on how things can seem easy if you are used to them, but they are really quite complicated for outsiders to learn.
We're fully aware our number system makes no sense. At least there's enough redundancy that you'll just sound weird but perfectly understandable if you mess it up. There's basically no chance of passing for a native speaker if you aren't one anyway, and we're used to foreigners butchering the language - it's still impressive when someone makes an attempt.
BTW, reading should be way, way easier than speaking, but some of Lem's works are a wild ride even if you do know Polish. They're worth it, though!
Does anyone have experience dealing with industrial-grade procrastination when it comes to doing work? I'm currently doing some freelance content-writing work, all of which is tedious but manageable, and every time I try to do any of it I spend >90% of my time scrolling through social media, random websites and the like. Sometimes it's educational (reading ACX and other similar content) sometimes it's absolutely mindless cycling between the same six websites (it brings to mind the slot machine gamblers in that book review the other day). I'm not incapable of focus generally – I can spend hours reading something or working on a particular interesting project – but focus on demand really eludes me.
Deadline pressure helps, but I find myself waiting until the deadline is imminent before getting anything done, and even then not always managing to meet the deadline. I'll mentally put it off ('the deadline is 6pm, but they won't actually check until the morning so really the deadline is 7/8/9am the next day') and it can do bad things to my sleep schedule.
I don't think this is an unusual problem (and I'm aware it's made much worse by the work being boring and working from home on my own schedule); does anyone have any useful insights into working with it? I'm theoretically open to trying Modafinil/similar, but my worry has always been that I'll take it and then fixate on something irrelevant instead of the work that needs doing.
carelessness and lack of attention to detail – not remotely
continually starting new tasks before finishing old ones – no more than typical
poor organisational skills – not at all
inability to focus or prioritise – yep
continually losing or misplacing things – almost never
forgetfulness – rarely
restlessness and edginess – not generally
difficulty keeping quiet, and speaking out of turn – not an issue
blurting out responses and often interrupting others – not an issue
mood swings, irritability and a quick temper – not at all
inability to deal with stress – I can handle it reasonably well
extreme impatience – no
taking risks in activities, often with little or no regard for personal safety or the safety of others – for example, driving dangerously – generally not
I had this problem when I was finishing my dissertation, but I did kinda overcome it. (I finished the dissertation, anyways.) This is what I did, ymmv:
Step one was to train myself to track when I was actually working and when I wasn't. I started keeping a document where I'd sort of "clock in" and "clock out" of work, in a very fine-grained way, so any break or distraction wouldn't count as clocked in. With some practice I got to a point where I could actually remember to "clock out" before I began scrolling through random websites. So then I had a way of knowing how many minutes I'd spent actually working out of a long day sitting in front of my computer, and could meaningfully set myself a goal of actually-working for, e.g., 4 hours that day. To some extent, just tracking it helped me to be more mindful and do a little bit better.
Step two was to get some sort of enforcement mechanism in requiring myself to achieve however many hours of work a day. A good reward for me was to play video games, so I made a rule that I couldn't play any video games until I had done my 4 hours of work. If I finished early, I could bask in the joy of video games for the rest of the day with a blissfully clear conscience; if I got stuck procrastinating, I wouldn't get to play video games at all.
It was an important part of this to make the work requirement fairly modest. If I'd said I needed to do 8 hours of work on my dissertation, that would have felt so hopeless that I wouldn't have even tried. Sometimes I set the goal at just 2 hours a day. It was still better than trying to do everything in the two days before I have a draft of a chapter due.
When the above approach started failing because I started cheating and playing unearned video games (!), I signed up for Beeminder (https://www.beeminder.com/) so I could arrange to be charged actual money for such failures. Ridiculous, but it did often help when things were really bad.
I had a very strategy for my dissertation (+1 for beeminder!) except that I pegged it to pages written or re-written if I was editing that day. I think I wound up paying once or twice but eventually the habit was engrained enough and I liked seeing how far above the redline I was once I got in a good productive groove. I think this probably took a few months, YMMV.
Other things that helped:
1) Getting into the office way earlier than anyone else. Like if your advisor usually gets in at 8:30, be at your desk writing by 7:30. Anecdotally more of my labmates preferred working around midnight to 2AM, but I think the principal (of avoiding our advisor when we needed to focus on writing) was the same
2) Telling everyone I knew that I was approximately 6 months from defending. One of those days it was even true
3) Finding collaborators to help with writing a review article that wound up being the first chapter of my dissertation, same for the 3rd chapter come to think of it
4) Realizing that I had 0 desire to stay in academia and the only thing holding me back from getting a real job grown-up job was finishing this fucking dissertation so let’s stop screwing around already
Good luck! If I had to do it over again, I probably wouldn’t. Even so I do feel proud of getting a PhD and if nothing else, no matter what, every time I go to a bar to order a drink I _always_ get…just what the doctor ordered ☺
Yes! I have been having to fight this problem my whole life and I think lately I've gotten pretty okay at it. My advice is super specific to *being me* so it might be useless to you. That said:
1) Identify the root cause of what causes me to get distracted.
2) Come up with a plan that addresses those that I AM ACTUALLY CAPABLE OF STICKING TOO
3) Do everything in the most brainless most simplest way possible that plays to my natural strengths
Root causes:
For me it's three things: task switching, anxiety, and not knowing what I'm supposed to do this very second. It's easy for me to just to a monotonous or boring task for HOURS uninterrupted, but it's hard for me to deal with "what should I be doing right now?" or coping with feelings of inefficiency and etc.
So I realized task switching was very detrimental in particular because it PRODUCED the other two root causes -- I'd be anxious I wasn't getting enough done no matter what I was doing (because I had two tasks to do, so working on one felt like slacking off on the other) and I would constantly feel like, wait what should I do right this second? Before I know it I'm scrolling reddit and refreshing hacker news for the 80th time.
Solution:
Give every major task an ENTIRE DAY unto itself. So if I'm splitting my month between a freelance project and my own project, I set up a weekly schedule. Monday and Wednesday is freelance client gig. The other days are personal work days. ALSO, crucially, I schedule client work days for days I have meetings with the client. Big interruptions like meetings (especially that cut the day perfectly in half) need to be mitigated, so at the very least I can make sure they're relevant to the project I'm working on that day.
This way, even if I have down time, I don't get caught in a secondary loop of what I should be working on. Off task? Okay, brainlessly go back to the ONLY THING I'M WORKING ON TODAY rather then get caught in an eddy of "task a? task b? task a? task b?" oh gee now I'm on reddit.
The next thing I need to deal with is the difficulty of starting things. That's the entire challenge. If I start something, I will do it. If I don't start something, wow, I have 5 hacker news tabs open and where did the hour go? So to crack this nut, I will do the "pomodoro technique" which is just start an egg timer and try to get as much done as you can before the 30 minutes is up. Except I don't care how much I get done, I'm just starting the timer so that I will actually get started on my task. When I start my task, I also give myself the goal of doing the simplest, dumbest, most brain dead task that could conceivably be considered part of the task. That gets me started, and an hour later I'm deep into the work.
So that's what works for me. No clue if it works for anyone else.
I think I have been naturally headed in the direction you are talking about (due to the "wtf this very second?!?!" thing / deciding between 2 tasks) but I really appreciate seeing your analysis of root causes and direct solutions here. Lightbulb moment. Thanks!
For writing in particular, I have two pieces of advice. (No guarantee they'll help: your mileage may vary, etc.)
1. In high school, my best friend was the editor in chief of our school newspaper. He had a great quote: "The thing that takes the most time when writing stories is procrastinating." We fool ourselves into thinking that procrastinating is okay because we're accomplishing other things, but if you mentally reorient to count all of that time as spent, unproductively, on what you're supposed to be doing, it can help to improve focus.
2. I wrote my PhD thesis a lot faster than most people. My key insight was that a crappy sentence on the page was worth a lot more than an elegant paragraph in my head. Editing is a lot easier than writing, but you have to get the first draft down first. Commit to writing e.g. 1000 words a day, and don't get up from the chair until you do. Disconnect the internet if necessary. This isn't especially novel - many prolific writers work this way.
Yes!! I have the exact same problem--I procrastinate by reading pretty much useless things online (ACX, news, commentary, etc)/scrolling through social media until I can procrastinate no more. For me, it's because if I don't plan out a project and time-manage myself perfectly, I hate the feeling of not having any time to do it and the feeling that I'm irresponsible for not time-managing myself well, so I just push it off more and more which obviously backfires in the end. And it's not that I don't like working on what I'm putting off--so then I get really into it right before it's due.
The thing that works for me and for why I procrastinate so badly is to just sit down and write/do the work--don't do anything but the task or else I'll get distracted and lost in whatever random thing happens to interest me. It's really easy for me to do more mindless things without procrastinating because with those I can multi-task. But for things where it demands all of my attention, what I do is sit down, give myself a lot of time, and do it. Make it a little more mindless and get into a rhythm in whatever you're doing so that it's harder to get distracted. Obviously if you're writing you need your brain more than doing dishes, but I think that if you get out of your head a little bit and just get into a flow in what you're doing it's a lot harder to get lost in the internet/anxiety.
I’m still one hell of a procrastinator myself but the thing that helps is tricking myself. Let’s say if I’m being honest with myself, today will be like most days and I will do nothing. I simply find the SMALLEST possible action that will get me closer to my goal. It can be getting out of bed. Often, it’s, I’ll check my email, then I’m allowed to go do whatever I want. I can literally open it, look at it, and close it and achieve more than some days. Luckily, something will often catch my eye on the email, and I’ll get to work. Sometimes there’s some false starts, but at least there’s now 0 barrier whatsoever to checking my email, it’s actually kinda wearing on me NOT to check it. I know what I’m doing to myself, obviously. But it can be really effective because for me the hardest part is starting. People will say “just start” but that can feel a huge commitment when you’re laying in bed. I say “think about starting”. “What’s the smallest action I could take to start?”. Or “let me go look at the page I would go to IF I were to start right now, no pressure at all”
The art of procrastinating by learning how not to procrastinate, not to be confused by procrastinating by working on some other task that you also have to do, both solid forms of meta-procrastination, judoing your dumb brain into getting stuff done™
Fully realizing (visualizing; paying attention when they happen) the payoffs of each alternative decision has helped me.
Of course this only works of you'd truly prefer getting the work done earlier. If you'd do the work in time and afterwards feel restless or guilty for doing nothing, then your brain is *right* in choosing procrastination.
For me, modafinil works really well. But it depends on the dosage. 1/2 of a 200g modalert pill is too much: I get exited and very creative about lots of things and can't concentrate anymore. But 1/4 pill, 4 days a week, enhances my concentration- and the satisfaction I get from work.
More specifically, my work-procrastination problem has the form of a very high perceived mental exertion for a lot of tasks, and modafinil lowers this exertion significantly.
Making notes. A piece of paper (each task gets a separate piece) and document everything related to the task. "This needs to be done: ..." "This is known: ..." "I need to ask X about this: ..." Whenever I lose focus, I look at the paper to find the next step.
Having a colleague to discuss things with. Discussing is in some sense a form of procrastination (while you are talking, you are not working), but the kind that allows you to work better afterwards. If you have the right person to do pair programming with, it's a miracle.
Walking away from the computer. That means I am not working, but I am also not browsing Reddit. Maybe a quite productive way would be to keep walking, and when I get an idea what to do next, return to the computer and do it, then walk away again.
What hurts me:
Like larsiusprime said, working on two projects at the same time, so even when you make progress on one, you feel guilty for procrastinating on the other. Even better, while you are making great progress on one, your boss interrupts you and asks about the other.
Nth for "I'm in the same boat." I'm a lifelong procrastinator who recently started a personal project, and the lack of any externally imposed deadlines hit me like a truck. What ended up helping me the most so far were two things:
- Coworking with a roommate who WFH (more generally, changing your environment so that slacking off is punished)
- Credible, meaningful positive reinforcement. Find something specific you want and resolve to not get it until your goal is done (in my case, it was a game that came out kind of close to my self-imposed deadline). Alternatively, find something to give up that will sting a bit, like sweets, until you're done.
(disclaimer: I also have work projects I'm procrastinating on that I don't care about as much. I haven't tried applying the above solutions to them yet and I'm also looking at finding some stimulants to help out)
I'm not sure if this is a direct continuation of a previous discussion, or a standalone comment, but I don't find this argument at all compelling. Many people have been called racist because they decided to investigate the issue rigorously, or because they took a position like "I don't know if there is a difference, but someone should investigate to see".
The reason is not that they investigated the issue per se, but that the reason they felt the need to and the amount of convincing it took for them to reach their conclusion. See e.g. Murray burning a cross when he was a teenager. This sets a very high prior on "maybe Murray really didn't like blacks to begin with" and makes it hard to believe that he came to his conclusion about HBD from dispassionate investigation of the evidence.
Now if a genetics prof like Graham Coop suddenly argued for HBD or called for an investigation, that'd be more surprising
What are you even trying to argue? Are you trying to argue that this is always the case? That is what it seems like you are saying, but it makes no sense.
In any event, because it is a thing that we don't know yet is a good enough reason. No one needs any other to do science. And plenty of people have been attacked *before* they reached a conclusion. Merely for doing the studying (or advocating it be done). I find your explanation lacking.
"When people throw racism accusations, they don't blame you for coming to the conclusion that black people are genetically dumber - they blame you for being easily convinced by bad/unrigorous evidence (e.g. HBD),"
The vast majority of people who will throw racism accusations do not know enough about the evidence to conclude that it is unrigorous. They blame you for coming to the conclusion, then assume the evidence must have been unrigorous.
Note that your behaviour in this very comment -- just asserting without evidence that all HBD is self-evidently unrigorous and that any academics supporting it are discredited, then assumes from this unsupported assertion that your opponents must have ulterior motives to not accept this -- is exactly the behaviour I would expect from wanted to rationalize a disgust reaction to a conclusion they never seriously considered, and not what I would expect from someone who actually had valid counterarguments to my "racist" beliefs.
I don't really get this. I don't have a disgust reaction to HBD being true. I'm not a public figure whose brand is being very woke. I'm not on social media. I'm not black. I don't partake in virtue signaling since I'm posting anonymously on the ACX comment section where I know my comments will be badly received. I don't know my IQ but my education (STEM major followed by bioinformatics PhD) basically ticks all the boxes for IQ correlates or what have you. Therefore, I have no incentive for "pretending to dislike the evidence" or whatever the hell. The worst that'd happen if I'm wrong is that I'm proven wrong on the internet, the horror.
In other words, in Moldbuggian rhetoric, I am completely removed from power incentives and I can therefore argue for or against HBD without any of the passion that comes with arguing for the sake of looking good, and only the kind of 'passion' that comes from anonymously arguing on the internet, which unless you're the extremely online kind or a teenager isn't that high to begin with. And even with all that, I'm sorry to say - but HBD is bullshit. It's basically unheard of in actual genetics circles. Their proponents are either frauds, or not geneticists, often both.
The common retort to HBD being completely discredited by the genetics community is that everyone knows it's true but is afraid of "uttering the truth" or something, which basically amounts to a giant conspiracy theory with no one at the top - Moldbug calls it the cathedral because granting big nothings a name gives them more solidity. But it's not that great a retort as people think it is, since you can use that argument for basically any position at all.
I don't know what you mean by "HBD." The claim that there are genetic correlates of race as conventionally defined is obviously true — 23andMe routinely tells people their ancestry on the basis of genetic data. If you mean the claim that blacks have a lower average IQ than whites and East Asians a higher, that's a question for statisticians, not geneticists. Since we know that IQ is in part heritable and know that races differ in the distribution of heritable traits, it's genetically possible. The question is whether it's true.
The best evidence I have seen against part of that claim was offered by Chanda Chisala, who I think shows that African genetic IQ cannot be nearly as low as some of the HBD people claim, using statistical not genetic evidence. I summarized his arguments in a blog post (http://daviddfriedman.blogspot.com/2021/04/race-gender-and-iq.html). He was able to make those arguments because he took the position seriously and looked at its implications.
Can you explain what you mean by HBD, what information geneticists have that shows it to be false or what other reasons you have to regard it as "bullshit"?
I'm sorry if I've misread your motives, but I still don't know what you hope to accomplish by just asserting that HBD is bullshit and all its proponents are frauds with no further elaboration. Especially since you acknowledge that you expect your comments to be badly received.
A lot of the priors are kind of obvious, some people just need a peer-reviewed excuse to challenge the orthodoxy.
Ironically, every single one of the obvious priors can be explained by dysfunctional cultures shaped by outside pressure (I believe the fashionable term is 'systemic inequality'), so they don't validate HBD garbage in any way - but fundamental attribution error is a thing.
If you are convinced by weak evidence, then there are *two* possibilities: either you had strong priors that lie in the same direction as the new evidence *or* you had no priors at all, in which case weak evidence is better than no evidence.
The only situation in which weak evidence doesn't lead to a conformant conclusion is when you have strong priors *the other way*. Id est, if you are criticized for accepting weak evidence on lower black IQs you are being told you *should* have a strong pre-existing assumption that there is no such thing.
It's pretty reasonable to hold a completely uninformative prior on the relationship between most phenotypical traits and proximity of some majority of one's ancestors to the equator, so granting that, the prior on whether specific extant races of humans should be genetically pre-disposed to being more or less intelligent should be that no, they are not. So you should have strong priors in the other direction, seemingly for any traits that aren't directly related to heat or light, i.e. nostril size, ear size, average body hair, skin color.
How do you know what is related to heat or light, and what is not? Heat is related to energy, which is related to metabolism. Light is related to all kinds of chemical reactions in the body, which have an impact on mind (consider e.g. the seasonal affective disorder). Hypothetically, any cell in the human body could be impacted by this. Unless we have a full model of human metabolism, such questions can only be answered empirically.
If I were an alien knowing nothing about humans, I could make all kinds of hypotheses. Perhaps it would seem logical to me that black people living in Africa are the smartest on the entire planet, because they need to spend the least energy on heating their body, and therefore have more energy left to all other things, including cognition. Also, their days are longer, therefore more inputs and more learning. Or perhaps I would assume that intelligence is proportional to body size, so taller ethnic groups have higher average IQ. (Is it considered racist to assume that some ethnic groups might be taller than others, on average?) I might also assume that countries with higher population density have smarter people, because more interaction between humans leads to stronger selection pressure.
That's silly. The natural prior is to assume that people who *look* different are different in all kinds of nonvisible ways. That's why it hardly surprises us that black people are generally better at sprinting or jumping, e.g. that a white man hasn't won the gold medal in the Olympics since 1980 -- nobody is jumping up in astonishment at that fact, or suggesting it must proceed from a massive social conspiracy.
Correct. Every gold medal winner from 1980 on has been black. In fact, no white man has even qualified for the 100m final since 1980.
And at that, the 1980 race was a bit of a fluke, since the US boycott mean most of the world's best sprinters (American, and black) could not participate. If you ignore 1980 the last white winner of a 100m gold was Valery Borzov of the USSR in 1972 at Munich.
Banned for one week (border of symbolic and meaningful) for accusing other people of having bad evidence and sloppy reasoning without giving any justification or making the object-level argument.
I am being especially harsh because this commenter unprovoked started a thread about HBD, which is reputationally costly for this site and for the people who respond to it. I am grudgingly willing to tolerate this cost for interesting discussion that discusses socially or scientifically important/interesting aspects of the topic while trying its hardest to keep temperature down, and I won't ban people for well-intentioned attempts at that, but this isn't it.
I'm a professional software developer who's interested in getting into teaching myself some hardware stuff. As a somewhat arbitrary goal I'd like to (eventually) get to the point where I can disassemble some commodity electronics, and then solder together a super basic computer that can boot CollapseOS[1]. Obviously there will be many baby steps between now and then.
Any recommendations on how to onboard my skills as quickly and uncomplicatedly as possible? Background: very familiar with high level programming languages (Haxe, PHP, Python, Javascript) decent experience with C/C++, no experience with hardware other than being able to assemble a PC and vaguely knowing what a capacitor and a resistor is, and watching my college roommate solder stuff now and then.
One approach would be to grab a Raspberry Pi and cable it to a breadboard. If you are starting with a Windows, Apple or Linux box a good way to start with serial port fundamentals. Serial Port Complete is a good intro.
You will need USB Port Complete for most up to date machines. Same principles as SPC. Nothing like moving bits through a UART to get closer to the hardware.
With the Raspberry Pi cabled to a breadboard you will be turning on LEDs, responding to switches being thrown and converting analog signals to digital pretty quickly. Also a lot of good books available on Pi projects.
Also check out Code Project website. They have a lot is samples under IoT projects.
There is this really cool video series by Ben Eater where he explains how to build a programmable 8-bit computer from scratch (or very basic components, rather) and explains how all the parts work[1].
He's also got another series where he describes building a computer based on a 6502 microprocessor, how to program it, how to attach various peripherals[2].
Might be similar to what you want to do. But I think he does assume some basic electronics knowledge.
We've had some amazing guests recently on the Futurati Podcast.
Andreas Schleicher is something of an education policy guru, and has studied the educational systems of nearly every major country to look for common, successful policies:
We were joined by famed futurist (and highly entertaining podcast guest) Brad Templeton, who set us straight on autonomous vehicles and chatted with us about his approach to evolutionary ethics:
Elaine Pofeldt is a journalist who has made a name for herself studying 1-person businesses that bring home seven figures in revenue. If you're interested in EA or giving what you can, you should probably familiarize yourself with her case studies:
I came across a nice writeup on [How to Hire a Cartoonist to Make Your Blog Less Boring](https://mtlynch.io/how-to-hire-a-cartoonist/). Does anyone know of similar writeups on How to Hire a UI Designer to Make Your App Less Boring?
It might be a good idea to hire a UI designer to make your app more ergonomic, more accessible and/or improve user workflows. But making the UI less boring for the sake of being less boring a) quickly loses its novelty, b) wastes system resources and c) tends to get in the way of what the user is actually trying to do.
I got the sense that "having a community of nerdy peers for ones kids" has been a consistent wish for a subset the comments-section community. (from discussions with marshwiggle.)
So I wanted to share that this one extracurricular math program exists, and for the first time, is accepting applicants from across the U.S.:
It only occurred to me to tell you guys about it when I was amazed by the careful effort put into the "testing/filtering" process. (excruicating-sounding-to-me; I hate assessment) Full disclosure: I know this b/c I begin working there in-person in fall!
Also, it occurs to me that if any of you have a kid achieving well in Middle School (MathCounts, etc.) or H.S. math competitions, (AMC, AIME, etc.) and you're looking for resources, I might be able to connect you with something useful. (I am not really known to hardly any of you, but marshwiggle seems to think I'm good at math coaching stuff. That's my rec.)
Also, I would appreciate it if people who know they are coming let me know by email: ddfr@daviddfriedman.com. That's so we will know about how many we are feeding. If you decide at the last minute to come you are still welcome, but we would like at least a rough count in advance.
It looks to me like, when a banned user is subsequently unbanned due to their ban expiring, the ban-triggering comment disappears. (I see both Villiam's comment here, and Freddie's comment earning the first registered ban, as blank. By contrast, I see radrave's comment behind a "User was banned for this comment. [Show]" message.)
(Minor point: since people so far have generally lived for less than a hundred years and then died forever, this is basically the same as an indefinite ban. However, this might change if somehow people start living longer due to biotech or whatnot. The world'll probably be *very* different in 2121 anyways.)
Would you be able to explain why the gathering request attendees be double vaccinated? My understanding is that breakthrough cases are rare and deaths from said cases are extremely extremely rare, like way less than car accident death rare.
The vaccine apparently reduces the risk of catching the disease by about a factor of twenty. Being vaccinated and associating with other vaccinated people reduces it by about a factor of 400 — less because vaccinated people take fewer precautions, more because a vaccinated person who gets Covid is likely to be asymptomatic and so not very contagious.
At this point, in this area, anyone twelve or over who wants to get vaccinated can, so I thought the requirement was a reasonable one. As I tell my kids, redundancy is your friend.
It doesn't apply to small children, and the doubly part assumes one of the vaccines that requires two shots. Someone who has had one shot of a one shot vaccine is also welcome. So is anyone who has had Covid already and hasn't gotten vaccinated — perhaps I should have specified that special case as well.
Are there any numbers on the odds of a serious covid case after getting fully vaccinated? My understanding was that it is a lot lower.
I personally had covid already but I still intend to get vaccinated. I’m interested in this topic because I was under the impression that a fully vaccinated person has successfully lowered their risk of a serious covid case to the level of a risk that would be acceptable in any other context. I haven’t actually seen the numbers so this could be totally wrong.
I don't know the specific number, just wanted to describe my risk model. If you are exposed to the virus, the number of viral particles you initially get has an impact on (a) the chance you get infected, and (b) the expected severity of the disease. So it is true that for vaccinated people the chance is lower, but it is also true that the chance increases when spending a lot of time with an infected person indoors. Asymptomatic people also spread viruses, by breathing and talking.
Not sure about exact numbers, but I would guess that the difference between "fifteen minutes in a park" and "five hours inside a house" is at least as big as the difference between being vaccinated and unvaccinated. So on one hand you decrease your risk by being vaccinated, on the other hand you increase your risk by exposing yourself to many people. Both effects are real; they point in the opposite direction.
To calculate the actual risk, you also need to multiply this with the base rate of infected people in population. Ten seconds of googling suggest that the probability that a random person in California is currently infected is 10% -- that sounds too high, I probably made some mistake. Also, the number needs to be adjusted, because a typical ACX meetup participant is not a random person. (Not sure how exactly to adjust. Being smart is good, but being a contrarian can go both ways.)
Risk = general population sickness × number of people × time spend indoors × vaccination coefficient
The vaccination coefficient is good to have, but it is only a part of the equation. This may be an unpleasant fact for the individualists and libertarians, that sometimes what other people do has a greater impact on your life than what you do. Vaccination changes the result maybe 10× or maybe 100×, the behavior of the population as a whole could make a 1000× or maybe 10000× change in your personal chance of getting infected. (Trivially, if no one around you has covid, you can do whatever you want, get no vaccine, and still avoid infection.)
So the whole story is not "get vaccinated, then you are safe", but rather "get vaccinated, then you are slightly safer, and if many other people also get vaccinated and the prevalence of covid in population drops to near zero, then you are safe". Sadly, many other people don't give a fuck, or respond with indignation.
Much safer, not slightly. If I understand the numbers correctly, vaccination means approximately that if I am in a situation where, if unvaccinated, I have a 100% chance to get Covid, I now have a one in twenty chance - 5%. So I would have to get covid 20 times over to get it once, on average. Apply that one-in-twenty to everyone you're dealing with and since you both have to critically fail, you now have a 1 in 400 chance compared to dealing with the same people with neither of you vaccinated.
This is why I consider requesting vaccination important enough I'm currently willing to do it.
Also, 10% seems crazy - could you have somehow gotten the numbers for all cases recorded during the pandemic? My county's rolling average is currently ~30 new cases a day for a county of 2 million; cumulative cases are only 119,094 but compared to, say, LA we were comparatively mildly hit. Checking, total California cases look like a bit under 3.7 million and population is 39.5, so yeah, I think you got cumulative total, and most of those people should be immune right now.
I think being clear that those with immunity to Covid-19, whether natural or from vaccine, are welcome. There are a lot of people who have had Covid-19 at this point and it is bothersome that the government and media are ignoring clear science to try and shame then into getting an unnecessary vaccine. So it's frustrating to see the same blind spot repeated by rationalist.
Are you asking why I'm saying fully vaccinated, instead of just having received one dose of a two dose vaccine?
The first reason is that someone who received their first vaccine dose one minute ago is zero percent vaccinated, and "fully vaccinated" is an easier criterion than "either fully vaccinated, or received your first dose long enough that it should work, which I would have to look up how long it is".
The second reason is that AFAIK first vaccine dose efficacy is potentially pretty low against some variants, see https://www.bmj.com/content/373/bmj.n1346 . I haven't checked to see if this is true yet, or how common those variants are, but I would like to err on the side of caution.
The third reason is chumra (see https://en.wikipedia.org/wiki/Khumra_(Judaism) ) - people are going to fudge it, and so I would rather be strict enough that even when they fudge it a little everything is still okay.
Thanks Scott. I was actually asking if vaccinated people need to worry at all about being exposed to completely non-vaccinated people. My impression was that very serious breakthrough cases of Covid are extremely rare to the point of being at a level where that amount of risk would be acceptable in any other context. That being said I don’t really have the numbers so that could be completely wrong. Also, one could argue that a jab or 2 is not a lot to ask from someone.
I know I'm coming to this a bit late, but I'm wondering what everyone's thoughts are on meal replacements? It seems like a cheaper, faster, and *arguably* healthier way to get nutrients than traditional food. Am I wrong?
I'm in Canada and I'm vegan so I've been thinking of replacing 3/4 of my meals with Soylent Original: https://www.soylent.ca/products/powder-original (and creatine) after I do some blood tests. Other meal replacement products aren't really available here. Anyone else doing something similar?
It seems deeply unlikely to me that replacing 75% of your meals with soylent could plausibly result in a worse result than an ordinary diet. Replacing 100% of your meals, maybe (or 99%). 75%? Maybe if the diet you're replacing is really rigorously healthy.
Humans have survived across a very wide variety of staple diets over millions of years, without major problems. If the Inuit can survive on seal meat for 75% of their diet, people can do fine on Soylent for 75% of theirs.
Yes, maybe Soylent is missing a few things that you need very small amounts of. Maybe it is not wise to eat a diet that's more than 95% Soylent. But 25% of your diet is plenty of pick up plenty of the nutrients that are so unimportant in quantity that we haven't managed to identify them.
Soylent lacks epistemic humility. There have been several times in the past that we have, as a society, confidently believed we knew everything that was nutritionally important. We were wrong about vitamins and micronutrients, and so we probably still are.
Therefore, sourcing all nutrients from pure substances (mostly powders) is most likely lacking in some important but currently unidentified nutrients needed in small quantities. If you want a meal replacement, make it from whole foods which have the right balances of macro- and micro-nutrients. (This is substantially more work. Tough luck.) I have a recipe, but I'm reluctant to share because the person I got it from (also responsible for MealSquares) was planning on marketing it and took down the public copy for that reason. My understanding is that ground sunflower seeds, oat flour, and marmite are a good set to start with for the micronutrients.
I'm also vegan so I take B12, iron, and D3 supplements. I haven't been keeping this up though because it takes up more time than just ordering or making something less nutritious. Trying to make a change. . .
Anyway, here's something that I find to be reasonably priced and very low effort.
Freshe salad toppers. They're canned tuna with some veggies, available in four flavors which aren't terribly different from each other. The spicy one isn't very spicy.
A pack of ten costs $40.
They aren't huge meals-- 260 calories/can and they're better with some fresh salad.
Anyone else have suggestions for the best money/effort combinations for real food?
I have tried for years to do something roughly like this, mostly for convenience. I just don't want to have to devote very many of my scarce waking hours to meal planning, prep, cooking, and cleaning. Soylent itself didn't work out super well. Whether or not it has theoretically most or all of the nutrients needed to keep a human body alive, it isn't very filling and it isn't very satisfying. One serving is 400 calories, and an average sized male is going to need at least 6 of those to operate at maintenance, assuming you don't workout or have an above-average level of activity, which of course, you should if your intention is to be healthy, in which case you'll need even more.
For what it's worth, meal services have gotten reasonably good. Plenty will deliver pre-cooked, pre-packaged fully contained meals made with entirely whole food ingredients rather than synthetic replacements like Soylent. I've been trying The Good Kitchen for about five months now and it's largely working out. I still don't think you can replace everything this way, but I go for 28 meals a week, which ends up being about 1800 calories a day. I need about 2700 for maintenance, so usually one additional handcrafted, self-cooked meal on top of this is sufficient.
I have no complaints with Good Kitchen itself, but they use UPS as a delivery service and UPS is extremely unreliable. They've missed shipments, frequently damage the box, split into multiple shipments with one arriving days late. They're supposed to deliver each Friday and I still haven't gotten last week's shipment yet.
Classified want ad (thanks Scott for okaying this)
I work for a tutoring company (NYC based but almost all remote) with rather well-heeled clients. We're looking for someone who can teach, roughly, the sort of topics you'd cover in the second half of a programming degree, but with a more practical bent - web design, making mobile apps, maybe game programming. The target students would be high school kids who did well in AP CS and the like and are looking for help with next steps. The pay should not disappoint. This is contracted work but there's the option for full time and benefits if you do well.
Please have:
* Bachelors, preferably from a name-brand school.
* Experience teaching advanced CS one on one for several years.
* 3+ tutoring clients/families who can recommend you.
Plusses:
* 7+ years of tutoring experience
* Familiarity with elite NYC private schools (think Gossip Girl) and their culture.
* Ability to teach other STEM topics (physics, chem, etc).
Sounds like this might fit you? Email a resume to patrick24601 [at] gmail dot com .
Because this is likelier to be noticed here than on the Arabian Nights post, here is a DeviantArt post of a map (with copious explanatory footnotes) depicting a world where magic is real and the modern world is gripped by a Cold War between the Idealized Middle East of the Arabian Nights and the Idealized Post-Roman Empire of the Arthurian Mythos: https://www.deviantart.com/quantumbranching/art/Arthurian-Romance-vs-Arabian-Nights-736657777
The entire account, Quantum Branching, is just maps of various althists and fictional scenarios, and seems like the sort of thing that readers of this blog might find *very* appealing.
It's not necessary that there be a high probability of near term superintelligent AGI for it to be worth spending a lifetime trying to avoid the problem. A prominent EA (forgot his name) put the probability of all X-risks combined at 1/6 this century, a probability high enough to easily justify spending lots of resources to prevent it, considering the near-infinite consequences of failure. Indeed, if every EA devoted all their donations to x-risk (which of course we don't) it would be a severe underspend at the societal level, if, that is, one takes the long-termist view (mainly because there are not many EAs relative to the size of society).
The link given waves its hands a lot regarding AI. Also, its probability estimates are hilariously bad.
I mean, let's go through the terms in his estimate for AI:
1) 50% on AGI that surpasses humans in all activities before the end of the century. This is directly lifted from Ord and I have no particular issue with it (I'm not well-versed enough to say yea or nay).
3) 10% that an AGI will have a reason to usurp humanity. This seems generally absurd; there will be more than one AGI created (likely hundreds if not thousands), and it only takes one to pass this criterion. Note that I'm dealing with #3 first because the desire has to come before the attempt.
2) 10% that an AGI attempting to gain control of society will succeed at usurping us. This despite the definition in #1 that the AGI surpasses humans in "essentially all human activities" (direct quote). Politics is a human activity. This amounts to a rote invocation of some "vital spark" that humans have and AIs can't have.
4) 10% that a rogue AGI that has successfully overthrown humanity will kill us or rule us forever. Scratch the second part of that disjunction, he's saying that >90% of the time a *rogue* AGI that has *defeated humanity* will decide to spare us. Seriously?! I'm not saying there aren't goals or hard limits that would accomplish this, but assuming 90% of rogue AI conquerors have the specific ultimate goals to avoid extinction (and it does need ultimate goals; there is no instrumental advantage to keeping humanity around once the AI is totally independent and unthreatened by us) is obviously insane.
500 years even at the slowest reasonable projection of computing capability growth is far more than necessary to get em-capable hardware. "Innovation is declining" doesn't cut it; 50 years of 20th-century innovation would be plenty, and we're not *that* much slower. (Fortunately, AI-capable hardware is very likely even lower in requirements.)
But even if it takes a millennium, that doesn't really matter. Silicon-substrate expansion is orders of magnitude faster than lighthugger generation ships, so even if we start trying to colonize other stars, at *most* a few systems get biological beings before the tech catches up and silicon overtakes them.
Is that supposed to be a convincing argument we shouldn't take it seriously? We're speculating about future technology here, this is already science fiction by definition.
The self-replication is of the entire ecosystem of factories, mines, refineries, power generators, and construction robots, not of each individual component.
The seed ship contains a fusion reactor, a store of refined materials (including fuel for said reactor), and a number of mobile construction robots. When it reaches its destination (i.e. some place with mineable stuff - icy bodies are good because you can make stuff out of organic materials and there's lots of hydrogen for your fusion reactors), the robots (powered by the fusion reactor - they can plug in to recharge) construct mines to mine the materials, refineries to refine them, factories to construct additional construction robots, and additional fusion reactors. When there are enough materials and construction robots available, new seed ships can be constructed.
The clear path is 'literally just emulate an entire brain'. It's probably grossly inefficient, but all it needs is hardware. That's the "em-capable hardware" I mean.
JSK said "50 years of 20th-century innovation", and suggested this would in fact happen within 500 years (i.e. a prediction that we are going to advance at least 10% as fast as we did in the 20th century).
Note that improvement at House's law (processor performance doubling every 18 months for identical power consumption) for 50 years would be 11 billion times modern performance.
Wait, when you say "silicon-substrate expansion", do you mean building things or travel time?
Travel time certainly won't be "orders of magnitude" shorter, as the acceleration limits of biological organisms are not especially relevant in interstellar flight (accelerating at 1g with a 99.99%-reflective laser sail, if I've done my maths correctly, requires a rather non-negligible heat dissipation of 146 kW per kilogram of spacecraft - 1g with a rocket of interstellar-capable exhaust velocity is far, far worse, as AM-beam has theoretical limits of about 0.3c exhaust velocity and 60% efficiency - and in any case accelerating at 1g compared to infinite only actually adds like 2 years onto a trip).
"Science LARPing" is very strong, needlessly provocative language, but... honestly seems pretty justified.
The replication crisis never really ended, and that was just the primary way that traditional science fell short of its *own* standards.
Publish or perish is not conducive to real investigation of important scientific hypotheses, and everybody knows it.
The completely malfunctioning journal system makes distributing knowledge slow and bad, and everybody knows it. Plus it ties the whole edifice to the even-more-badly-malfunctioning university system. (That I won't say everybody knows, not yet. But *you* certainly do, you wrote a book on it!)
So it's needlessly provocative. But not wrong. Par for the course for the Notorious E.S.Y.
Are you sure you're not part of "his community"? You're a fairly regular commenter here for one!
But that particular tweet doesn't seem that embarrassing, or embarrassing at all. Have you read Robin Hanson's posts related to that specific topic? Eliezer's point seems eminently reasonable to me with that context.
How did you manage to interpret a statement agreeing with someone about how people being able to completely ignore an incredibly common criticism of their field they would almost have to know about is indicative of uncritical social conformity as being about how "I'm the REAL genius"?
Similarly, have you ever considered that convincingly arguing that some specific trend exists generally requires actual evidence and not heavily implying that evidence exists ?
Implicitly accusing everyone working in the area of being fake scientists ("science LARPing") defrauding the public is a take of substantial temperature.
Sure, they're wrong, and vincibly wrong, but if the standard is "never be vincibly wrong and never teach error" then we're all going to hell, myself and Eliezer included. A standard of "don't maliciously teach error" is more reasonable, but I'm not seeing malice here.
I don't share your negative reaction to the tweet. If you have an argument for your distaste of it, you'll have to actually make it. (Is it the tone? Is it associating a supposedly-high-status group like academics with a supposedly-low-status activity like LARPing?) And is your problem even with Yudkowsky's tweet, or with Hanson's critique of scientists studying astrobiology?
Having read Hanson's article, his critique sure seems plausible, i.e. any books about the far future which claim that "aliens we meet would be much like us, even though they’d be many millions of years more advanced than us" sure sounds fundamentally flawed to me, likely irredeemably flawed.
And in the first place, what's your prior on far-future predictions by any random academic discipline to be anything but nonsense, given how extraordinarily hard such predictions are to make? Given insufficient incentives to make correct predictions, and a lack of tight feedback loops to form good intuitions etc., I'd expect a discipline's performance to be more like that of pundits than like that of Tetlock's Superforecasters.
Finally, we could take another step back and ask to which extent actual contemporary academia (vs. idealized science) actually deserves its high status, and hence to which extent whether they're (un)deserving of the tweet's ridicule.
I'm symbolically banning you for a day for this comment. I'm against people saying "Look how stupid this is!" without any explanation of why they disagree, with a possible exception for things that are really stupid in a hilarious way, which I don't think this is.
I already posted this on the subreddit, but I figure it's worth posting here as well.
For a couple months now, I've been trying to integrate Scott's posts about trapped priors (The Precision Of Sensory Evidence and Trapped Priors As A Basic Problem Of Rationality) into a more sophisticated theory about how my own OCD works and how to deal with it. I ran into some issues with this, as the model seemed to be incomplete in some way. Over the past month or so, I've compiled my "findings" into a long-ish blog post on my own Wordpress page (which I rarely use). I know this is a longshot, but I'm hoping that Scott sees it and is able to give some feedback on the model I've come up with. Feedback from others is welcome as well, especially mathematicians for the game theory related stuff in section III.
To clarify: This is about much more than just my own struggles with OCD, although that's certainly part of it. This is an examination of the trapped priors framework in general, and has a lot of broader applications for e.g. explaining self-serving bias and raising the question of how to counteract it without overcorrecting.
Hey, thanks for the response. I'm actually not so sure that my explanation of wishful thinking and yours are different - mine might just be putting yours in a more technical way. I'm not totally sure about that though, as I'm almost certain my framework is missing something.
I actually have never been properly part of the rationalist community - I've just been exposed to epistemology and metaphysics more generally for most of my life, both because of it being a strong interest for me and because of the almost unavoidable atheism debates online in the 2000s. To be totally honest, leaning into Bayesianism has improved things considerably - it's pushed me away from "you have to be 100% certain about everything or you can't believe it" and toward "it's okay to lean into what you already believe, even if it isn't certain, and adjust in small increments as new evidence comes in".
What you suggested is certainly something I've thought of before, but at this point I'm already pretty rigid about caring about truth - I was a lot more imaginative and willing to believe anything when I was a little kid, but I don't think there's any putting that genie back in the bottle by now. I honestly envy people who are able to believe in a religion on faith without needing to question it.
It's a nice idea, but I don't think this is what I'm looking for. It is somewhat relevant, since many of the questions I have OCD issues about are sort of adjacent to religion (mostly related to the metaphysics of consciousness), but as I said, I don't accept the premise that religious beliefs are false to begin with - as mentioned, I have some beliefs that are quasi-religious (like non-physicalism about consciousness, leaning toward substance dualism) that, on a purely rational level, I actually *do* believe are true. The problem is that, on a more primal level, I have pathological OCD doubts that are causing me to be far less certain of myself than I know (on a rational level) I should be. Taking your approach would be giving up on the *actual* truth value of these beliefs, and I'm not willing to do that - that would just be giving in to the OCD doubts.
> If one says to himself “the love DSP is untrue, and scheduling it into my life is to endorse falseness. I won’t do it!”, we might admire his devotion. But we’d probably be right in describing him as foolish — he is subject to a biological speed limit of rationality, and to pretend that this limit does not exist is to accept a suboptimal DSP schedule.
I wonder how strictly this holds. Buddhist monks who just sat still until they died come to mind. Part of the canonical explanation is something like a complete suspension of DSPs. And, an even more speculative example, some people with psychopathy seem to be able to consciously micromanage DSPs.
Your hypothesis that one religious service a week is in the neighborhood of optimal, while having religion pervade one's life is at least hazardous has a counterexample-- orthodox Judaism. It can go bad, but a lot of people find that having a highly ritualized daily life is satisfying, and it clearly isn't incapacitating.
R. A. Lafferty was a very odd and striking sf author mostly writing in the 60s and 70s-- his work was a combination of sf, tall tales, Catholicism, and just plain weirdness. There's a free online event coming up on June 12. For details see the link.
The featured work is _Space Chantey_, a tall tale science fiction retelling of the Odyssey.
A typical short story by Lafferty, to the extent that such a thing exists. The human race is accelerated to the point that people can have two or three full careers in eight hours. When I read it in the late 60s, it just seemed whimsical. Now it's rather prescient. https://www.baen.com/Chapters/9781618249203/9781618249203___2.htm
Another thing which seems prescient is that he wrote a number of reputation dystopias, where people's reputations were very easily destroyed.
I have always enjoyed Lafferty, and this line from the linked story made me laugh:
"“I will scatter a few nuts on the frosting,” said Maxwell, and he pushed the lever for that. This sifted handfuls of words like chthonic and heuristic and prozymeides through the thing so that nobody could doubt it was a work of philosophy."
Quick, what would be today's buzzwords so that "nobody could doubt it was a work of science" (social or otherwise?)
I don't have a handy answer to your question, but you'd probably like Lafferty's _Arrive at Easterwine_-- it's got Catholic theology *and* copious snark at tech startups.
I haven't read too many science papers, but "discursive" and "metastatize" always stick out to me as "nobody uses these" words. Maybe "deconstruct" too.
Question for people who understand the history of GWAS studies and the candidate gene approach: I understand that large scale GWAS studies have shown that many candidate genes are just false positives. As Scott showed in his famous 5-HTTLPR post, entire subfields of candidate gene studies have been built around false positives, only to be conclusively debunked by large GWAS studies.
What's weird to me though is that this seems to be the opposite of what I'd think would happen. All else equal, isn't a candidate gene study more responsible than a GWAS study? With a candidate gene study, you have a priori reasons to suspect a gene, which theoretically should reduce your opportunities to p-hack. In contrast, a GWAS study feels like it could be a fishing expedition across millions of nucleotides.
The reality of course was that it wasn't "all else equal". For some reason, the candidate gene people had a culture of p-hacking, whereas the GWAS people did not. For some reason, the candidate gene people had a culture of low sample sizes, whereas the GWAS study did not. And somehow, the candidate gene people never fully corrected for population stratification, whereas the GWAS people managed to find a way to do this.
My question is why? Is it just that geneticists are rigorous and psychiatrists are not? I feel like there must be a deeper reason that I'd like to understand.
Hello, geneticist here. I think the candidate gene approach is mostly a historic mistake. It made total sense to try to detect the effect of candidate genes with sample of a few hundred people when it was thought that most trait were influenced by a handful of genes. Furthermore, this is the kind of approach which worked very well to detect effects in plant and animal breeding programs (artificial populations produced by selected crosses can have genes with large effects segregating).
It took the on replication of candidate gene studies and many successful GWAS studies for us to understand that, with very few exceptions, most traits in humans are not influenced by a few genes with large effects but by many many genes with extremely small effects.
On the last politics allowed open thread there was a running argument about whether the left was uniquely more censorious than the right, cancel culture, etc. Here's some recent links that may provide interesting context
This isn't the current right, but it was a revelation to me when I thought about how much the mere existence of homosexuality was written out of public life until the 60s or so.
On China and how the post pandemic economic recovery and ensuing political capital gains may not be sustainable.
Interesting both because of the importance of the long term trajectory of China to, well, everything. And what it might say about the shape of the post pandemic world more broadly if recovery leads to only a short boost in popularity for incumbent governments.
Seems like the author is viewing China through a prism of a thoroughly American millenial point of view. As if the Chinese are just Americans with funny-shaped eyes and weird accents. Given the significant cultural differences history already tells us exist, the prognostication here strikes me as more than normally dubious.
For me to answer that question, *I* would have to have deep insight into Chinese culture, which I do not. However, I know that I don't know, and I know that trying to understand it via projecting the social fads currently sweeping the United States is almost certainly going to go badly wrong. What little experience I do have with other cultures -- meaning I have traveled and lived among them -- tells me that cultural differences are, if anything, stronger and more pervasive than they seem from a distance.
So I know enough to doubt the predictions a priori, without being able to offer anything better.
The fact that the (atrocious) treatment of non-Han minorities has any relevance whatsoever for majority-Han opinion of the government. Generally a significant underestimation of the popularity of the Chinese government in the Han population.
> Given the significant cultural differences history already tells us exist
On the other hand, when Chinese people are allowed to exist in a place free of Beijing's influence (e.g. Taiwan) then they pretty much wind up acting like Americans with funny-shaped eyes and weird accents. (An exaggeration of course, but there's nothing fundamental about Chinese people or culture that makes them any more inscrutably foreign than Japanese or Korean people.)
The convergence between the Taiwanese and Americans could be explained by the former living under the massive cultural influence of the later, rather than just being free of Communist rule.
My limited experience is that Taiwanese are quite different from mainland Chinese. I also rather suspect that mainland Chinese can no more be lumped together than, say, Knoxville County Tennesseans and Santa Clara County Californians.
The author, Elizabeth Economy, is a Senior Fellow at Stanford University’s Hoover Institution and Senior Fellow for China Studies at the Council on Foreign Relations. She has also lived in China for quite a number of years and published several well regarded books on Chinese politics. That doesn't make her infallible of course, but it seems of there's anyone you can give credence too it would be her
Well, as a skeptical empiricist, I pay a lot more attention to the consistency of an argument with what else I directly know than to the credentials of the person making it. You'll note I did not say I knew the author was wrong, I just said my doubt was a priori high. Now, had she begun the article with a solid exegesis on the *differences* between Chinese and American culture, and how those differences didn't matter for the theses she was about to explicate, I would have paid a lot more attention.
Something I'm curious about: it appears that the Sinovac vaccine has much lower efficacy than the Chinese government's claims; something like 55% vs the official claim of 90%.
Now, 55% is not sufficient to give herd immunity. On the other hand, I can't see the Chinese government admitting that its own vaccine sucks and going cap in hand to get two billion doses of the Pfizer.
So this seems to create a situation where everyone in China gets vaccinated with a 55%-effective vaccine which they are told to believe is 90% effective. Once foreigners start entering China again there's going to be a major epidemic among the already-vaccinated Chinese population.
From what I've read, the Chinese vaccines are good enough that prevent you from dying or getting seriously from the Coronavirus. It's not as good as the mRNA vaccines, but they can be made with existing industrial production more readily and thus are better for deploying en mass to the third world.
You might be underestimating the complexity of the problem. It's extremely hard to predict the course of an epidemic, even when you have copious amounts of data, as should be exceedingly obvious by now. And that's without even considering how the genetic evolution of the virus might alter the future in what are even in principle unpredictable ways.
China now has very hard and fast lockdown policies. So even if the immunity rate is low, of detecting cases in an area means you put the whole area into "noone go outside ever" scale lockdown then you can keep outbreaks under control
Interesting reading, however it seems to me the author is leaving out China's historical experience with separatism, namely that it leads to warlordism and warring kingdoms, which makes great material for movies and novels but is pretty terrible to live through. The most recent such phase ended in 1928! Given that Chinese people (in my limited experience) have a lot more historical consciousness than Americans, that's got to be helpful to the Beijing government.
It is also slightly ironic that the Gini coefficient of the PRC, 0.48, which is supposed to sound the death knell of the CCP, is slightly lower than the Gini coefficient of the USA.
I read a summary, I don't think it's going to have any impact, but I guess that just depends on how low people's expectations of government competence in the face of a pandemic were to start with.
I watched it all. I already suspected the big picture stuff: there was no alternative plan to "flatten the curve" in the early stages. The institutional failure seem pretty big and "trust the experts" would have been a disaster. We seem to be very lucky that the pandemic got up to speed so close to summer.
It's also interesting to hear the anecdotes about how dysfunction the government was. He tells it like an action story, with lots of heroes, villains and plot twists.
Some of the politicians interrogating him tries to score cheap political points but most of then were very professional. No-one really tried to defend Boris.
How much you should trust Cummings is debatable but I tend to like him. Hopefully evidence will get out that confirms his big claims.
Dominic Cummings didn't reveal anything that wasn't pretty obvious already and didn't bring any new evidence to the table. It is not going to change anyone's mind about anything. It could help the Tories kick Boris Johnson out of No. 10, if that's what they want to do anyway.
Whole thing is here for anyone who has 7 hours to spare. So far I got to a 2 hour mark and it´s fascinating, at least for nonBrits. Otherwise no opinion. https://www.youtube.com/watch?v=8LFS3FaRs_s
Is there a name for this type of relationship? It pops up all the time and probably is quite meaningful but I can't wrap my head around what exactly it means. None of the statistics textbooks I have can explain this case.
Similarly is there a name for the specificity-sample size tradeoff? For example let's say you are 40 years old and drive a ford f150 and you want to determine how dangerous it is to drive from San diego to Phoenix Arizona. You could look at accident rates for your car, for americans in general per mile driven, for the stretch of road you will be driving down, for people in your age group or 35-45 year olds driving ford f150's down that stretch of road over the past 5 years. Obviously the last category is the most relevant to you personally but the sample size would be tiny. The more you expand your sample size the lower quality the date you get is (the danger of driving a prius is very different from driving a truck, ect).
For the your first question, I don't know the name of it but I can think of a lot of examples. For example, being a Democrat anti-correlates with being a Republican, but both of them correlate with being American.
I mean... I hope I'm not missing something here. Seems like an obvious way to conceive of a negative feedback loop. But when things seem obvious I start to second guess myself. A lot of biochemical reactions look like that. A buildup of A encourages the formation of B which encourages the formation of C, which inhibits A.
Also not sure of a name, but that is often just a symptom of condition B being underspecified. For instance, the example here of being a Democrat and being a Republican both being positively correlated with being American means "American" is too broad a category to be causally explanatory. If you narrowed it to being a rural American, a Californian, a South Dakotan, now it no longer correlates positively with both being a Democrat and a Republican, so you've actually captured a geographic trait relevant to the distinction.
There is a more specific way in which that American, Republican, Democrat categories are statistically pathological, though. Republicans and Democrats are both proper subsets of Americans, so there is a sense in which "American" represents the universal set of all elements that can be classified as Democrat or Republican. So be especially wary of that. Clearly all elements of a universal set are correlated with whatever defines that universal set, which makes the definition meaningless. This is obvious for "the" universal set. Nobody tries to say that any trait X correlates with existing, as that is clearly a spurious and meaningless correlation. All possible traits correlate with existing. But similarly, all possible traits that can only apply to Americans will correlate with being American, so that correlation is equally meaningless.
A black populist Republican comes to prominence on a platform of opposing abortion and immigration, protecting black civil rights, pro-capitalism and anti-government sentiment: 50%
Herman Cain's twitter account runs for President: 95%
I mean, the Prominent Black Republican Leader is certainly a role that the Republican Party has been wanting to cast for a long time, but the problem is the lack of anyone to fill it.
There's been a parade of failed attempts to fill the role over the years. Ben Carson was smart but had some pretty weird ideas about pyramids, while Herman Cain had a bad history with the ladies. And neither of them had much of a stomach or taste for politics, they both thought they could skip the boring "Mayor" or "Representative" or "Senator" or "Governor" stages and jump straight to President, or at least maybe Vice President.
Then you've got Condi Rice who is well qualified but has no interest in trying to talk to regular people (and who fled public life in shame after the Iraq War), and Colin Powell who was never nearly as Republican as people wanted him to be.
which is pretty long odds but somehow doesn't seem long enough to bother with. Other candidates at 81-1 include Amy Klobuchar, Jeb Bush and Kim Kardashian. (Rand Paul is 101-1, worse than Kim Kardashian, ouch!)
Interestingly, Joe Biden is at 7-1 and Kamala Harris at 4.5-1. It's tempting to stick a grand on each of them, and a couple of small bets on every plausible Republican, because these odds seem way mispriced at the moment.
I'm pretty sure Joe isn't running again. His whole schtick is that he's a transitional figure. That said, I don't think Kamala automatically gets the nomination, and she seems to consistently underperform her fundamentals in elections, even ones she wins.
I don't know who's next up on my side, but I think if Tim Scott can survive the nomination process he'll be a shoe-in for president and probably deliver a red trifecta. My only consolation will be just how much I'll enjoy watching Manchin eat crow when McConnell abolishes the filibuster and does . . . well, what it is that he wants. Probably something bad with taxes or social security, but I know I don't understand what those people want at all, so who knows.
Of course, as a blue partisan, I always think things are gloomy for my side. Probably the future will be neither as good as I hope nor as bad as I fear. That's generally the way to bet.
I suppose the situation is similar to what Scott said, a problem of Moloch: that a benevolent dictator can easily fix the academic science situation, but anyone within academic science will find it almost impossible to make any constructive changes. Based on this my prediction is that: Academic Science will over time keep getting worse, progress will come slowly and with more per capita effort and unless some deep structural reforms are done there will be no way to counter-act this trend. Any thoughts?
The private sector exists for pharmaceuticals and they're well aware of how often medical and biochemical research is wrong, but they use it anyway since you need to start your massive compound screens somewhere. I'm not sure how successful they are, some think the industry is about to fail from inefficiency but most people think it's just fine, and they certainly seem able to produce results when needed.
Even the private sector doesn’t have good incentives. Pharmaceuticals is a good example of that I suppose, lots of effort to discovering adjacent compounds that can be used to extend patents. Some times the private sector has shown excellence was in cases like Bell Labs, where there wasn’t any profit motive directing research. I think this is a problem governments will have to fix
I agree that this is an incentive problem. The research of a lot of chemists has made the industry very good at modifying and rapidly screening compounds, which is essential for synthesising drugs that work, but also really useful for slightly modifying existing ones. The second option is obviously cheaper since it takes much less work, so if "we" (regulators, insurers and consumers of medicine) are willing to pay them to do that, then obviously they're going to just do that.
It's not like Pharma is exclusively tweaking older drugs though, for example research into monoclonal antibodies has given Big Pharma some of their most profitable drugs, although that's mostly because they're so expensive. This does seem like a good thing for patients with cancer and arthritis, and as the costs go down and the technology improves (for example, requiring less frequent injections) Mabs could be used to treat more conditions.
There's also the impressive speed of vaccine development, although the innovation there may be more regulatory than technological, and obviously it wasn't entirely private.
Generally, I think I'd say that the industry and its research is very effective at optimising for what will make them the most money, and that's generally correlated with what will actually help patients. Sometimes they even do things that are less profitable in order to buy goodwill (most vaccine research), although goodwill is perhaps best thought of as a currency of its own that can be earnt and spent. I think the profit motive does work in our favour in the case of big pharma, although it should be recognised that this is only because of significant government subsidies, both directly (paying them to develop treatments for diseases that wouldn't otherwise be profitable) and indirectly (via the funding of the academic research that the for profit research builds on).
As always, I feel the need to acknowledge that I have worked for GSK and so I've definitely swallowed the "Big Pharma is a force for good" Kool-Aid that they serve to all the staff, but it should at least be reassuring to know that the majority of people there are primarily concerned with helping patients - much more than academia where people are primarily concerned with creating interesting graphs to put into Nature papers. It's a great example of how private and public researchers have very different incentives.
Bell Labs was only feasible when they had a monopoly on telephone service and could charge monopoly prices. Bell Labs basically went away when Bell was broken up. It's telling that none of the descendants have founded a similar lab since.
Well the youtube piece basically talks about the problems with H Index. First he notices that H-Index correlates with top scholarship till around 2005 (when it was invented), Einstein was third in H index till 2000s, then people gamed the rankings and Einstein fell to 600 and above. This is because physics papers have 100+ publishers, everyone cites each other and so on. So to mitigate that he introduces the CAP ranking, which is[ Citations - Co Authors - Publications ] > 0. Example if you have 8 papers in a 5 yr period such that C - A - P > 0 for 5 papers and < 0 for 3 papers then CAP = 5. Ranking people based on this criteria seems much better than h index, though he points out that he would prefer rankings that don't use citations, he's not sure how they would be designed
I wonder if there's a lesson to be learned from 4chan's experience with /pol/ and the more general problem it's had with every containment board? The problem is something like "whatever we try to contain actually just multiplies in containment, and then breaks containment and spills out".
Ofc in /pol/'s case it was particularly bad since the groups and people involved were deplatformed everywhere else and became a plague on 4chins, but I wonder if there's something more general there.
If a significant portion of your userbase frequents both the containment and non-contaiment boards/subreddits/discords, they will spill over into each other.
Still, is this actually a problem? 4chan's turbocharged racism, sexism and overall -ism works as a normie repellent and keeps it from turning into reddit, and most of the community uses it ironically.
One lesson is that, on any given discussion site, any "off-topic" forum dedicated to current affairs and politics will either take over the entire site, or at the very least, take over the attention of the moderating staff.
Of course, anybody could probably have learned this from watch the evolution of any internet forum over last few decades. Also, I am aware of the irony of making this post in an politics-allowed open thread.
At this point, I think I have an approximate understanding of the CAP theorem, but I'll take another crack at it later to either understand it or figure out what I don't understand.
There's a hint that allowing for time makes the theorem weaker. What happens when there are active agents (people) trying to route around the censorhip?
Might the actual outcome be that censorship gets routed around, but the interface for doing so becomes something that most people aren't willing to use?
As far as I can recall, many containment boards actually worked. Ponies and pokemon, for instance, were successfully constrained. I'm not so sure the current state of things is about a failure of /pol/ etc. as a containment board, or the fact that our current lame discourse is so steeped in theatrical politics that it has become unavoidable. 4chan has always taken pride in contrarianism. As "woke" issues become more prominent in public discourse, 4chan will respond with a proportionately contrarian reaction.
There's definitely a brutal selection effect here, where by definition containment boards that are noticed are almost by definition ones that have failed. My experience has been that the probability of the strategy working in a specific instance is decent, but it's definitely a tool that shapes a community at the cost of sustained effort by moderators.
Ponies and pokemon were internal affairs. Some wanted to talk about them, some got sick of seeing them, both had nowhere else to go, conflict arose, new boards were created, people got segregated and things calmed down.
/b/ and /pol/ were something else, a product of outside hype resulting in millions of outsiders flooding the site in hope of participating in whatever it was currently advertised as. At that point, the boards were sacrificed to newcomers in hope they'll stay there.
Calling both of those "containment boards" is technically correct, but they're two completely different kinds of "containment". /mlp/ is a ghetto. /pol/ is a floodplain.
I want to emphasize that, to a certain extent, floodplains work. Topical boards survived - someone who imagines /a/ or /v/ as "/pol/ with anime/videogames" is mistaken (the times of /b/'s domination were much worse, at the very least). They get to defend their customs and culture, and naive newcomers who don't leave their containment board behavior and topics at the door are warded off. Still, those less-naive newcomers who learn to respect local customs will still largely come from the flood, and still get to retain their beliefs that made them join the flood in the first place, and there's nothing that can be done about that (at least when you're 4chan and, by design, can't effectively ban people).
To sum up, the lesson of 4chan is "don't let others define who you are".
For any ACX readers inhabiting or visiting Texas, the Austin LessWrong and ACX group is actively meeting in-person and will be having a dedicated far-comers meetup Saturday, June 5th.
Far-comer meetups are dedicated time for people who live too far to attend regularly, so come meet people from all over Texas. We've also had more people joining the community lately with people moving to Austin from around the country.
Is there a moral obligation to have your act together enough so that you can be reliable? Is there a good way to frame this so as not to excessively blame people with executive dysfunction?
> So it's often said, "If you want to be treated like a lady, you have to act like a lady." And similarly, "If you want to be treated like a person, you have to act like a person." That's the idea I'm trying to get at.
For a very recent example, one of my friends wanted to make a remote appointment, and the receptionist pointed her at a doctor who doesn't do remote appointments.
A receptionist seemingly has a professional obligation to route calls in a way that pairs caller with receiver such that they can actually work together and meet each other's need. Doesn't require a moral obligation to expect it.
You can try to say a person who chooses to work in some professional has a moral obligation to uphold the professional obligations of that profession. I'm just not sure you need to. The point of morality seems to be compelling correct behavior in cases in which there is otherwise no compelling factor. Be good just for the sake of it. Professional obligations don't need to be internally compelled by one's conscience to uphold them. The threat of job loss, possibly even losing a license when the profession is a real profession with standards boards and licensing, is more than enough.
Stoicism has a shit-ton to say about this topic in particular. In short, you should have your act together because it is a virtue and having virtues is good for you (as well as those in your community). You should not blame other for their *actions* because you do not and can never know their full context and frame of mind. You should encourage them, and teach them if they are receptive, but only acknowledge and not blame them for things that *you* perceive as shortcomings.
There's only a paradox to the extent that your moral system assumes that all people have equal inherent moral worth.
A utilitarian, for example, shouldn't have difficulty with the idea that some people are inherently morally laudable or condemnable. Likewise, historically virtue ethicists outside the Christian tradition have understood that you can be born with a more or less virtuous character the same way that a horse foal can be born with more or less potential for the particular virtues of horses (speed, stamina etc.).
I don't personally see anything contradictory about saying that being less able to meet one's obligations makes one a worse person. If anything, I find it hopeful as it is easier in my personal experience to medicate this sort of moral defect than it is to fix other comparable bad habits with no clear neurological basis.
I think there's an obligation not to be negligent. Like if you run into a kid on the road in your car because you were texting and driving, even if you didn't mean to, it's still blameworthy. When you drive cars a certain level of paying attention is demanded.
Perhaps another way of putting this is that willing an action requires willing a certain amount of mindfulness, since a certain amount of paying attention is required to successfully perform the action.
The difference with actions from malintent here I think is that actions from malintent are necessarily blameworthy -- it doesn't even matter if you succeeded in running over the kid in cold blood, there's no possible way going out of your way to hit them is good -- whereas actions from inattentiveness are only potentially blameworthy -- if you couldn't swerve out of the way because the kid darted out in front of your car in a split second, it's possible you COULD have reacted differently, but only with superhuman attentiveness and skill. Then again, it's also possible that you shouldn't have been texting and driving in a school zone.
I was thinking more in terms of executive function and (vs.?) making and keeping promises. Or the moral issues with large important institutions which just aren't reliable.
I think the theory works fine for promise keeping. You should never break a promise with malintent (why make the promise in the first place), but not all "lapses" are necessarily bad; their badness depends on the context.
Involved in the context I think is the nature/constitution of the agent. If you naturally forget things more easily, perhaps you should be cut a bit more slack before you are blamed. But there will still be lapses of attention -- e.g. the car accident -- for which anyone should be blamed/which we should judge as wrong, regardless of their circumstances.
This I thought was clear from the notion that successful action requires a certain level of reliability/attentiveness, such that one can't possibly act well without a certain level of mindfulness.
Does this address your concern more directly or did I miss again?
"Symonds, who will be taking Johnson’s name, has spoken publicly of her Catholic faith, while Johnson was baptised into Catholicism but renounced it for Anglicanism during his Eton schooldays, according to biographers."
Because of course he did. Something he has in common with Maggie Thatcher, who ditched Methodism because CoE is more respectable and mainstream if you're a Tory.
I was completely unaware of any Catholic connections with Boris until now. I have never seen it mentioned, not even if/when talking about Tony Blair (who did convert). This raises a ton of questions: did he formally defect? (probably not, so technically still one of ours, like Mike Pence and Neil Gorsuch in an American context).
Given that adultery and numerous affairs and illegitimate children and (allegedly) abortion for one previous girlfriend don't seem to have troubled Boris, why now the church marriage? Well, probably Carrie. Which also raises why bother now, given that she had no problem having an affair with a married man, contributing to the cause of his divorce, and having a baby out of wedlock with him? Again, probably down to "the parents would like you to get married in church".
This is what you call cultural Catholicism, I can't even call it cafeteria Catholicism as there seems not to be any "picking and choosing which doctrines I follow", but rather "doctrines? what that?" going on. I'm one of those who would agree that it's unacceptable for the likes of this to happen - Boris and Carrie get a ceremony in Westminster Abbey (because it's quaint, I suppose?) while ordinary Catholics cannot get remarried in church after divorce - and it's not because I support second church marriages for the divorced, but the blatant favouritism, string-pulling, and 'one law for the rich and another for the poor' going on. I'm sure it's very nice they have a family friend willing to be their personal priest who has this kind of pull, but while I agree with Pope Francis that pastoral care of the fallen-away is better carried out with delicacy than hitting them over the head with a brick, there's not much chance here of bringing Boris or Carrie back to the full practice of their faith.
It's not entirely impossible, but it will be a miracle.
Doesn't Boris know that the proper way for the bloke who runs the country to get remarried is to start a new church with himself at the helm, thereby plunging the British Isles into 120 years of sectarian conflict? It probably helps that none of his exes' nephews have ever sacked Rome.
I have the nagging suspicion that the unhealthiness of the modern diet is not just a side effect of capitalism-driven taste hacking; that instead the enhanced taste *itself* is a causal factor. I'm now sticking to unprocessed, unseasoned foods.
Now that you've mentioned it, I have no idea how much salt people used. Availability would have been a constraint, but did people who had access to a lot of salt have used a lot more than they needed?
Salt is one of the only minerals that we can taste and that tastes good (unlike say K, which you can taste but that's so you avoid too much of it). Asceticism is fine and dandy, but it doesn't require you to get dehydrated to do it (which is what will happen if you cut too much salt out of your diet).
There's definitely habituation. Try a 72-hour water fast. You will be *amazed* at how rich the taste of even "boring" food becomes afterward. Doesn't last, though.
I suppose I'm an outlier in this, but in general I find every additional ingredient tends to reduce my enjoyment of a meal. Rarely does something taste as good as just some chicken and broccolini or peas thrown in a pan. Even comparing sandwiches that I make myself to ones I could buy in a deli I'm always shocked at how much the herbs and sauces they add reduces the experience. When I've been living alone it's been trivially easy to eat healthily, but no one else can accept the food that I make myself.
Perhaps this group could use some funny animal stories. Or they could be viewed as evidence that we're no better at testing the top end of animal intelligence than we are at testing the top end of human intelligence.
A stunningly intelligent herding dog takes control of a lawn roomba. I strongly recommend the other links in this post-- the dog's shenanigans, the dog teaching another dog, the war with the fox.....
That is wonderful, thank you. And yes, sheepdogs are the smartest dogs. But I'm a little bit surprised that one Sheep Simulator(TM) was enough to activate the herding instincts; I thought that usually took three.
Do people count as sheep? A friend told me about a border collie which was only content when the whole family was seated at the dining table so that they could all be kept track of.
My family had a german shepherd, growing up, and she definitely did this. Anytime we were all walking together, she would circle the group and (nicely) head off any strays.
Dog instincts are baffling to me. My in-laws have a beagle they raised from a pup, and that dog is spoiled pretty rotten. Basically a lap dog, spends most of her time indoors on the couch. Never been within a mile of a hunt a day in her life. Treated squirrels and things the same as any dog, some barking, a little chasing. Nothing major.
But Beagles were bred to hunt rabbits. And when she was about 4 years old I was taking her on a walk and we came across a rabbit. I have good reason to believe it was the first rabbit she had ever seen.
And she. Went. Nuts.
She was after that rabbit like a rocket powered summer sausage. The rabbit got away, naturally, but she was obsesses with sniffing out it's hiding place. It took a good ten minutes before I could, with difficulty, pry her away from the site. She'd never reacted that way to an animal before.
How the heck does a Beagle know that it's meant to hunt rabbits? How does it know what a rabbit is enough to distinguish it from a squirrel or cat? Did they just breed together all the most rabbit obsessed dogs they could find? Where is the rabbit recognition coded into their DNA, and how exactly does that work?
Do they need to address current events directly? Otherwise (as Scott's Arabian Nights review shows) I think the best bang for your increasing-open-mindedness buck is to read old books. E.g. Homer/Pascal/Confucius >>> for range and depth than practically anyone alive today.
Can anyone recommend a good provider for pharmacogenetic testing, specifically applying to psychiatric medications (SSRIs, anxiolytics, etc.)? A family member is having issues with her medication and we're hoping such testing might identify better candidate therapies.
I used https://www.genelex.com/ a couple of years ago; the price was very reasonable and the report seemed *very* complete.
I ordered the test because no medical provider I've ever encountered believed me when I told them I have absolutely zero pain relief from opiates. Their disbelief made recovering from surgeries absolutely excruciating. I strongly suspected I had the CYP2D6 gene mutation that prevents opiates from being usefully metabolized, and it turned out I was right.
Having the laboratory paperwork made all the difference in the world with how doctors reacted to me. I went from being treated like a drug-seeker and/or ignorant hippy to a fully legitimate partner in the healthcare process. Suddenly it was important to have the testing results on my chart, and my primary care doc even had the in-house pharmacist do some research so we could be prepared with pain management alternatives in the event of a dire emergency or upcoming surgery.
As a bonus, the test also flagged a mutation that puts me at high-ish risk of having a dangerous adverse reaction to estrogen-based medications. I had no idea, but now I'm obviously going to avoid estrogen-based birth control and estrogen replacement therapies.
Your family member's mileage may vary due to the condition you're looking at, but I can say I had a very positive experience with Genelex!
Could tracking suicidal ideation/suicidality be helpful? Or might it risk drawing undue attention to suicidal thoughts - and making the problem worse? Sometimes, I feel like it could help to keep an eye on this metric for the purposes of being sure to intervene before the problem gets too bad, and to see what positively/negatively impacts it.
- Thoughts on converting microCOVIDs to micromorts
I like the idea behind microcovid.org a lot; no opinion on how sensible the modelling is but I haven't noticed anything obviously crazy. But I feel like for youngish people in countries with a large chunk of the population vaccinated, the risk budgets are pretty conservative.
Suppose you are a 25 year old with an IFR of 0.01%. At this point, when your chances of infecting a vulnerable person are pretty low, it seems fair to assume that at least 10% of the badness of covid comes from the risk of death. So say 1 micromort = 1000 microcovids. Then the "standard risk budget" of 1% risk of covid per year works out to 10 micromorts. That seems pretty low indeed to me, compared to e.g. the risk an average American takes by driving, or the Value of a Statistical Life used by governments (https://en.wikipedia.org/wiki/Micromort). But maybe I'm thinking about it wrong, suggestions welcome.
Recommendation request for the tabletop nerds out there. Our group's last campaign ended with them ascending to join the pantheon of gods and we're looking to do a followup campaign in a system that supports playing gods. Not in the "super awesome level 99 badass" sense but an emphasis on guiding mortals, remote action, nationbuilding/politicking, and everything one imagines an interventionist-but-not-physically-manifested deity would get up to.
This is a niche thing so I'm not expecting to find a perfect system, but I feel like there has to be some medieval royalty-simulator game or something that has a mechanical focus on the kinds of things we're looking to do.
I've never played it, but maybe Amber Diceless RPG? As I understand it there's a ton of implied setting, since it's an obscure RPG based on an even more obscure series of fantasy novels, but the concept of being gods responsible for abstract concepts / platonic ideals dealing with complex interdimensional politics sort of fits.
There's also Exalted, which is theoretically what you want although as I understand it a typical game is more like Dragon Ball Z. Kind of like how World of Darkness is more Blade than gothic horror in practice.
I've played Amber, and I don't think it fits the bill. The protagonists are very powerful, but they're still physical people interacting directly with the world.
Are the Amber books that obscure? It seems to me that a lot of people still remember them. On the other hand, a lot of people were born after the series ended, and there's no movie or television show, so maybe they're sort of obscure.
For the record, I think the first book was very strong, and the series started going downhill with the second book, and went downhill fast after that. The common opinion seems to be that the first five are good, but the second five aren't so good. There are people who like the whole series.
I've heard that someone asked Zelazny why he was writing those potboilers, and he pointed at his children who were playing outside. He said he needed to have [some small number] of books in print for each person he was supporting.
Amber's not obscure! They're the most accessible intro-level Zelazny. They're certainly not (IMO) his best, but... well, I think there was a major quality break between the first and second series, and even the first was descending into chaos pretty badly by the end - Zelazny was really better at plotting shorter pieces - and you're probably right about the first being the best, but potboilers still seems rather harsh.
Then again, I have a soft spot for Zelazny in general. Let's see... I'd say he's less well known than Bujold, better known than Cherryh? I'd expect anyone really into science fiction to know (of) him, but not anyone who just knows a few greats. Your mileage may vary on whether that's obscure.
Personally, I recommend A Night in the Lonesome October, Lord of Light, or any of the short story collections.
You might want to look up Nobilis. There isn't an emphasis on politics IIRC, but you're playing the embodiment of a concept so you have godlike powers.
I would see if you can find an answer through Tumblr user Prokopetz. He's a game designer who frequently answers such questions. You could send an ask or just dig through his old answers until you find someone who asked a similar question.
I’ve got this perverse desire to say something just transgressive enough to earn a one day symbolic ban from ACX. It’s kind of like the time I got a behind the scenes tour of the big cats area of the zoo. If I had really wanted to I could have reached through the bars and touched a tiger’s flank. Am I the only one who gets this sort of urge?
My transgressive comment is that Data Secret Lox is full of Trumpists and, even worse, the Leftists who suck their cocks and tone police the other leftists who might have something different to say. They banned me and that's fine. I asked them for a lifetime ban and they wouldn't give it to me, but fuck them, they suck Trump's cock.
Well, obviously I was just going for a laugh. But now that you mention it there is an unexplainable - to me, at least - reluctance to acknowledge the obvious fact that Trump is a moral cretin in the comments here.
I believe that Trump University (a scam aimed at very vulnerable people) was enough to disqualify Trump on moral grounds, but people don't seem to agree with me.
I also think that him hanging around the Miss Universe dressing rooms when he knew the young women didn't like it is a more serious matter than that "grab them by the pussy" remark, but apparently I'm extremely weird.
Not only that, but an insistence that he could only have lost the election if the other side cheated-- made before the election, so there was no evidence.
An orthodox Trumpist from the old site, who is over on DSL now, says it this way, quoting Lincoln on Grant. "I cannot spare this man. He fights."
It's a pretty common view on the right that the left is out to destroy them. If you believe that, you're going to be willing to make accommodations with people you probably otherwise wouldn't.
It's obvious and so uninteresting. Scott had a post a long time ago pointing out that controversies that get a lot of attention are always ones where it is not obvious which side is right, so there are partisans arguing both sides.
The nub of the controversy is that each side thinks it is obvious that their side is right. I don’t think it’s uninteresting that there is not a shared reality though. That is pretty frightening.
I don't think that's David's point. I think the point is that even most of his fans don't defend Trump's character. The idea that he's a moral cretin isn't controversial because it's widely shared, even by many (most?) of his voters and fans.
If I’m completely honest though I had never really considered the possibility that many Trump fans are aware his behavior is terrible and are still okay with it. I guess I’m terribly naive but that idea is really a gut punch for me. More naive thinking: But that’s bad!
I'm starting to get the impression that if party leaders of the Democrats or Republicans came out and said there was a giant hurricane coming, get to shelter now, a large amount of the non-leadership members of the other party would not believe them, and if they even got up to look out the window it would be solely motivated by trying to prove the other wrong.
This is not entirely because each simply distrusts the other for being in another party, but also as a result of each one frequently exaggerating a problem, or otherwise caught lying about a particular thing, and the habit of each echo chamber's favorite past time being to emphasize any instance of this from the other. That's both a problem of hyper focusing of an echo chamber on tearing down the other, and a problem of too frequently using and even defending absurd hyperbole's within each group. Poe's law comes into play, but maybe when people start calling people out years later for saying the world won't end if [x] gets elected (real example) when the world, in fact, did not end goes a bit beyond Poe's Law.
The glimmer of hope here is that at least the plurality of the U.S. is not aligned to either major party, as best as polls can tell, and I don't usually get the impression that other distantly separated parties (Green vs Libertarian, for instance) have this extreme of a problem with each other.
As a Libertarian, if a Green Party member tells me a catastrophe is likely to occur, and I know nothing about said person other than party membership, my priors on the catastrophe do not change at all.
I think you may be a little optimistic with your final conclusion!
(Mind, if I do know more about the person, especially if they have domain-relevant expertise, that doesn't hold. And note that at the very beginning of the pandemic, in California at least (where I was looking, at least), we didn't get the nasty factionalism - that came in later - despite our government being pretty much straight Democrat. So I don't think it's actually quite as bad as you're describing, but I also don't think the minor parties are an exception.)
I may be missing something here, but your description of your own affiliation and presumed reaction seems to support my point, rather than contradict it. The way you wrote it though sounds like you meant it to contradict?
I think the span of time specifically between the beginning of the pandemic and now has seen significant heightening of this sort of divide and distrust, and the various reactions to it I think have contributed a lot to each sides disregard of the other. That said, I wasn't in California, but for whatever reason a lot of my more republican contacts were specifically in California, and they seemed to jump on distrust of anything about the Pandemic harder than most others. That much could have just been my own exposure bias, and there were a few there that at least didn't outright deny everything as simply a lie very quickly. The severity of the distrust on that topic did seem to correlate more with just how supportive they were of Trump specifically (so the republicans that didn't like Trump much anyway seemed less likely to call it a hoax, and only seemed to care about certain government actions in the matter). I think even they got more distrustful since that time though of any Democrats. That much could just be my own exposure bias though, I did not conduct substantial polls throughout that time.
Anyone have insight on (1) the extent to which inhaled glucocorticoids (specifically Symbicort) impact the effectiveness of the mRNA COVID vaccines (specifically Pfizer)?
From 20 min of Googling all I could find is the following:
The AAAAI emphatically says, "No, there is no impact on an individual’s ability to respond to the vaccine and control of asthma is essential! There is no data to suggest that inhaled corticosteroids and/or leukotriene receptor antagonists impact on immunogenicity of the mRNA COVID-19 vaccines."
(They seem to avoid the error of assuming that no evidence = no impact: they use different language when discussing another question on which "no information could be found and more information is needed" suggesting that they are not simply saying, "oh there are no studies on this so it's not a problem." They also seem to be speaking pretty specifically about Symbicort in that answer, since later they say "Daily oral steroids may interfere with the antibody response to the vaccine based on data with other immunosuppressives and flu vaccine.") (https://education.aaaai.org/resources-for-a-i-clinicians/vaccines-qa_COVID-19)
However, drugs.com says there are moderate interactions between Symbicort and the Pfizer COVID vaccine:
"If you are currently being treated or have recently been treated with budesonide, you should let your doctor know before receiving SARS-CoV-2 (COVID-19) mRNA BNT-162b2 vaccine. Depending on the dose and length of time you have been on budesonide, you may have a reduced response to the vaccine. In some situations, your doctor may want to delay vaccination to give your body time to recover from the effects of budesonide therapy. If you have recently been vaccinated with SARS-CoV-2 (COVID-19) mRNA BNT-162b2 vaccine, your doctor may choose to postpone treatment with budesonide for a couple of weeks or more"
... For SARS-CoV-2 (COVID-19) vaccines, vaccination should preferably be completed at least two weeks before initiation of immunosuppressive therapies; however, decisions to delay immunosuppressive therapy to complete COVID-19 vaccination should consider the individual's risks related to their underlying condition. Vaccines may generally be administered to patients receiving corticosteroids as replacement therapy (e.g., for Addison's disease)." (https://www.drugs.com/interactions-check.php?drug_list=432-2530,4221-19642)
Not sure if I'm violating any copyright issue here but...
From the linked article:
"In 1947, having left Nazi-occupied Vienna for the quaint idyll of Princeton, N.J., seven years before, the mathematician Kurt Gödel was studying for his citizenship exam and became preoccupied with the mechanisms of American government. A worried friend recalled Gödel talking about “some inner contradictions” in the Constitution that would make it legally possible “for somebody to become a dictator and set up a fascist regime.” Gödel started to bring this up at his actual examination, telling the judge that the United States could become a dictatorship — “I can prove it!” — before his friends (one of whom was Albert Einstein) managed to shut him up so that the naturalization process could go on as planned."
So, I'm considering getting an electrotrichogenesis (ETG) treatment for my incipient boldness (I'm vain. I know). The people at the ETG center claim it is super effective, and judging from the before-after pictures they showed be, it seems to be the case (assuming that the pictures are true). They also cited some very impressive statistics like: 96.7% exhibit extra-hair growth, and the average hair count increased by 66.7%, as compared to 25.6% in the control group. So far, so good.
When I went to check the literature for myself, I found that the results they (and every other ETG clinic) cite come from this one paper on the International Journal of Dermatology (see link at the end), which is from 1990 and has treatment group of 30 people. The reported effect sizes are so large that you should be able to capture them in such a small sample, but still the fact that the sample is so small and that there has been no follow up research makes me a bit weary of the effectiveness of the treatment. (There is a follow up paper in 1992 that uses the same subjects as the 1990 paper and extends the treatment from 30+ weeks to 70, showing further gains in hair density, girth, etc).
According to Wikipedia, it has been approved for use in Europe, Canada and Australia, however I am not sure if this means that it's proved to be effective be the corresponding authorities, or it just doesn't kill you.
Anyway. I'm curious to hear your thoughts and experiences (if you have any) on this.
Background: MS in Molecular and Cell Biology. Next to no neuroscience. Last paragraph is best.
My first reaction is "crackpot". I could theoretically see an RNA-to-memory and back translation working, but much too slowly for human thought. Also the machinery for that would be *complicated* and it would be weird that we haven't already noticed it. Thirdly, what happens when you try to recall a memory? Generally the way cells handle information, like the genetic code, is by keeping one copy for long-term use and making copies of it every time the information is needed. So when you're wondering "what was that word again?" or trying to recall an event or image- does the brain go and make copies of every memory RNA and pop them up for comparison? That's a lot of effort for just one memory!
You could argue that there are ways to get around this, like maybe the brain has a way of focusing effort on the neuron where it knows the right memory is. A signal is directed to the right storage place, which has a signal for activating other neurons in the correct pattern to form a thought. But if the neurons are already sophisticated enough for that kind of mapping, why not cut out the middle man? Why the added complexity of the RNA system? Nature hates waste, and systems that are both expensive and complicated tend to get the boot.
It kind of sounds like you might be getting some real science from the wrong sources, though. There is in fact something known as siRNA ("silencing" or "short inhibitory" RNA), which is used to identify and defend against viral RNA found floating around in the cell. This is indeed something that is inherited across generations in some species, and is sometimes called a "memory", but it has nothing to do with the brain or thinking. It's an immune reaction, and was first discovered in worms and plants.
No, thank you! I don't have a job that lets me use my degree, so this is really refreshing. Hence my unnecessarily long and multifaceted post.
The article doesn't have nearly enough technical detail for me to seriously evaluate it. It's written so vaguely that it could be the RNA just codes for a stress response protein, which was discovered decades ago, though I assume that's not actually what it is.
Basically the "memory" they're talking about amounts to just a "danger" signal- nowhere near as detailed as what we would think of as memories (people, places, events, words), nowhere near as interesting, and probably closer to a hormone than a conscious thought.
>Why do you feel it would be too slow?
Remember that while RNA can carry a lot of information, it's not particularly easy to decode. RNA is already used to code for protein; the process uses up a fair amount of energy and can take several minutes for long RNAs (~20 seconds for short ones, supposedly). Given that on occasion you *can* remember certain facts in less than 20 seconds, an RNA-memory system seems unlikely. Moreover, the amount of information you could get off of an RNA molecule in 20 seconds would be very small; the genetic code only uses 20 (22?) characters.
>Why would it be weird if we hadn't discovered this?
There is no way that this system is as simple as RNA->Neuron firing. Generously, it would look like RNA->adapter protein->Neuron firing, or more likely RNA->adapter protein -> signal cascade (involving more proteins) -> Neuron firing. Complex RNA systems, exist, but there are very few of them. That small number isn't there because it's particularly useful; they're there because they're doing something so important that any cell that tries to tinker with and improve upon them dies instantly. So I would expect any such system to involve protein, and be very noticeable.
Moreover the machinery we *know* already works with RNA is ribosomes, which translates them into protein and which are large enough that we could see them on microscopes in the 1950s; it would be pretty weird to learn that there's been this smaller, possibly less energy intensive way of processing RNA that's just been invisible in the background this entire time.
>I don't understand why you'd need to make copies of the memory RNA
For the same reason that DNA makes an RNA copy before making a protein- to prevent the information from getting damaged. The DNA stays safe and compact in the nucleus and lasts your entire lifetime, while RNA ventures out into the cytosol and is ripped apart in about 20 minutes. Not keeping a copy might be a cheaper way of keeping memories, sure, but dangerous. Imagine waking up every morning and not recognizing your family. (Admittedly there are actually protective structures on RNA that would make it last longer, but not long enough, I think. Also the cell deliberately removes them after processing is done.)
(Also admittedly, part of the reason for this quick decay is that a. RNA is less stable than DNA, and b. RNA is usually single-stranded, and less stable than the double-stranded DNA in the nucleus. You could theoretically use double stranded RNA, sure, except that most organisms from humans to snails to house plants are on a constant search-and-destroy for double-stranded RNA, since it usually belongs to a virus.)
>You didn't ask, but...
even if RNA was a good storage vehicle, it *cannot leave the cell it starts in*, at least not at neuron-speeds. The nerves themselves would need to have some way of passing along that complexity, and mostly they are not much more complex than a light switch- they can be on or off, and they can't be on for very long. Nerves *can* usually send more than one type of signal, but not more than one at the same time, and not in rapid succession. Asking them to match RNA is really pushing their capabilities.
>The following is uninformed, irresponsible speculation
I had a strong prior against this anyway. I think of perception as like what Scott describes in Section 2, here: https://www.slatestarcodexabridged.com/Book-Review-Behavior-The-Control-Of-Perception
Where the first tier system is a bunch of neurons that recognize "lines" or "curved lines" all feeding into neurons that recognize "rectangles" or "circles", which feed into "pyramid-shaped" and "egg-shaped" neurons, which feed into "face" and "room"... etc., eventually forming a whole image. And I imagine memory as being an extra neuron at each of those tiers monitoring which neurons fire. And when you go searching for a memory, that extra neuron simply fires back the signals it noticed earlier, letting tier 1 activate tier 2, activating tier 3, etc., thus letting the memory recreate itself.
I've been told that stone tools were for a very long time and over a very wide area so uniform that humans' ability to make them might have been genetic rather than learned.
Or maybe there's only one good way to make them. Note that every time the wheel has been invented it's round. Strange coincidence?
Are you sure you mean RNA? RNA in general is very short-lived[1], with half-lives measured in minutes, except for RNA that makes up functional structures like ribosomes (and even they tend to get recycled after a few weeks). The whole point of DNA is that it's way more stable than RNA, in part because the more reactive parts are buried inside, and in part because of its built-in error-correction code.
Also, we only get RNA directly from our mothers, because the sperm contributes *only* DNA to the fertilized zygote, so far as we know.
The problem with memory being stored by DNA (if this is what you meant to say) is *how* the "memories" get turned into new DNA. It's not completely impossible, of course, since there are mechanisms for reverse transcribing RNA, say, into DNA and inserting it into the genome. Viruses have done this for millions of years, and we think there is ample evidence that some of these genes have become part of the human genome in general. But we don't know of any such capacity to convert the consequences of lived experiences to permanent changes in the DNA of germ cells (ova and sperm), which is the only way they would be heritable.
Plus we *already* have a way for the experiences of one generation to translate to changes in the DNA of the next: natural selection. Only the people with the right DNA get to survive, and if some of that DNA codes for instincts, as it surely does, then that can be considered a form of genetic "memory."
Forgot the reference:
https://www.sciencedaily.com/releases/2017/07/170712201054.htm
I don't know, I don't think I follow the arrows well enough to know what they mean. Are you saying life experience changes the proteins expressed in cells (which it definitely does), and that these can be heritable? There's plenty of evidence that the mother's experience during pregnancy (and to some extent a bit before it) affects the newborn via epigenetics, and given the way development builds on previous states, these changes may be permanent (for the child), but even here there's no heritability. If the child does not experience similar streses when the grandchild is born, the effect won't be repeated.
What I'm saying is we don't know of any mechanism that would allow detailed life experience to turn into part of the DNA in the germ cells. Basically, there's no way for an ovum resting comfortably in an ovary to become aware of what the neurons are up to, short of some very broad crude shared cellular experience like there's a shortage of glucose all the time.
Or maybe another way to put it is that we're more a colony creature that we may feel: there's a part of our bodies that is concerned with reproduction (our reproductory system), and there's a part that is concerned with survival for ourselves (e.g. our brain), but these two parts don't really talk to each other much. Ova can't yell up at the brain "Holy Christ, we're getting old here, DNA is getting methylated left and right, hurry up and find a suitable sex partner wouldja?" Neurons can't yell down "You know being short has turned out to be a real handicap, and while I'll do my best to find us a tall mate, could you maybe tweak *our* genes for leg bone length to make sure?"
What if we're praying to the wrong god, and he's actually sending people to hell just for that?
That sounds like a scientific question, so you're saying we should put more resources into science.
In particular, though, it's a *very hard* scientific question, one that may not be resolved for decades or centuries, so you don't want to put *all* resources into the task - you need to pace yourself with these sorts of things, or everyone starves to death in a month and you don't make much progress.
Is the nature of God a scientific question? Isn't practically any conception of God immaterial whereas science only deals with material realities?
Science is the process of formulating and testing hypotheses about what is real. "Which gods are real?" is a scientific question, if indeed an extremely difficult one due to the lack of obvious ways to observe afterlives or higher realms.
Are hypotheses themselves real? These seem essential to science but non-scientific bc immaterial. The scientific system can't examine itself.
What about numbers? They seem to be more eternal/stable/necessary than scientific truths and are not scientific. If they are not "real" what are they?
What about names? What about your stream-of-consciousness? Real or unreal?
sure... everything becomes obvious if we are sure which is the right God
A decision theory isn't viable if it can easily be hijacked. Shielding against the Pascal's Mugging kind of deception has positive expected utility.
What answer are you looking for? You link to yourself pointing out flaws of applying utilitarianism, so are you asking for a steelman/modification of utilitarianism that doesn’t have these problems?
I believe Meta also believes utilitarianism is flawed. I expect Pascal's Mugging (https://www.lesswrong.com/tag/pascal-s-mugging) and other critiques are already "common knowledge" (maybe).
You may not find a true-believer of utiliterianism who can provide a strong argument of the system (or be convinced by your argument).
You came across to me in your first comment as supporting util. and it's implications, then others are pointing out the flaws in the implications (which you are already aware). I'm arguing that your intention has been misunderstood.
We have no reason to believe infinite pain or pleasure actually exists, either in intensity or in duration. If we did, then there would be severe problems with the hedonic calculus, this is true.
“No reason” can still cash out to a very small probability. Then a small probability of infinite days of suffering or bliss will swamp other considerations.
One counterpoint within the utilitarianism system is why is this hypothesis privileged while the opposite hypothesis not? (God worshipers go to hell and vice versa)
A counterpoint outside the system is to say that basic utilitarianism only informs our ethics in a small amount of situations and shouldn’t be applied to the vast majority of other situations.
All this is handled within the regular objections to Pascal's Wager.
I don't understand your final point at all. Large amounts are much closer to small amounts than they are to infinite amounts. You can't draw _any_ such conclusions from an inability to calculate with infinite values. Merely large ones are still perfectly fine.
> You are being mugged one time.
I'd invite way more than one mugging if I was the kind of person who gives in to those.
I see the problem more in the source of the evidence than in the probability attached to it. In some situations, updating on what someone tells you just isn't a good idea, regardless of what is being said.
On the other hand, if there *verifiably* was a one in a trillion chance I'd end up with eternal torture, and I could nil that chance right now by choosing to die - then perhaps I should.
If you immediately send me $100, I'll ensure that you have an eternal life in heaven.
Oh, why haven't you sent it to me yet? There's a chance of you having infinite utility and only a measly cost of $100.
Thought about it some more, and 3 possible answers came to mind
1. There's zero bayesian evidence for heaven/hell in the mugging attempts.
2. Pure utilitarianism is flawed because it's susceptible to these kinds of deceptions.
3. Giving in is shortsighted; it decreases my overall ability to evaluate possibilityspace and optimally deal with these kinds of gambles. The utilitarian choice is refusing.
I favor 3 > 2 > 1.
2 is fun because it reminds of decision theory's CDT agent who self-modifies into a non-CDT agent to prepare for newcomblike problems.
Clearly infinity makes everything weird.
I'll give you an even more counter-intuitive suggestion - assuming Christian dogma, a newly baptized baby has just been cleansed of sin, and is guaranteed heaven for the moment at least. Given the size of heavenly rewards and hellish punishments (they don't even have to be infinite, merely enormous), killing newly baptized babies before they have a chance to grow up and sin is _clearly_ in their best interests.
I mean, it would be super weird to be a utilitarian if you're a Christian, since Christianity _already_ utterly rejects utilitarianism.
It does depend on how you define utilitarianism.
"The greatest good for the greatest number" is something Christians would be on board with, since God is (almost by definition) the greatest good and should be made known to the greatest number possible in order to maximise utility.
Acting as if from a position of universal love, while impossible for a sinful human, seems consistent with Christian doctrine and seems likely to approximate some form of consequentialism.
If you mean hedonic utilitarianism, then yeah, probably not, I don't think any Christians would reduce life to the pursuit of pleasure and avoidance of pain, although that's hardly unique to Christians. Preference utilitarianism is also ruled out since many people have preferences that will bring suffering to themselves and others, but again, that objection is hardly unique to Christians.
Third paragraph on love leading to consequentialism may just be my thoughts as a consequentialist, but I think it's reasonable to say that someone acting out of love would desire the best outcome for other people.
If you're going to define utilitarianism that broadly then just about everyone is a utilitarian.
Is there a moral system (actually taken seriously by someone out there) which is actively opposed to the general idea that it would be nice if the greatest good happened to the greatest number of people?
I wouldn't say opposed, but non-consequentialist systems of morality don't think that the correct way to determine the morality of an action is to consider its effects on people. The conclusions of utilitarianism are very inconvenient (all people matter equally, so self sacrifice for the sake of others is obligatory) so when people realise that they generally oppose the idea that they should be responsible for bringing about the best possible outcome for the world. Cynically, most people's morality concerns getting the best consequences for one person, themself.
I don't think it's unreasonable to define utilitarianism broadly, "utility" can refer to welfare or satisfaction, not just to happiness, and most hedonic utilitarians have very complex ideas of what happiness is anyway.
Not many want the opposite, but Kantian deontologists for instance think such things are completely immaterial to what is moral.
As C.S. Lewis points out (in the Screwtape Letters, IIRC), God clearly thinks it was worth having some people be born and grow to adulthood, even with the risk of Hell, and He has commanded us not to do the thing you're talking about, so unless you think you're wiser than God (impossible, since you started this premise by assuming Christian doctrine), you're wrong and the only question is how to figure out how.
The set of all existences that include entities that believe themselves to be the Christian God is larger than the set of all existences that include entities which are correct about believing themselves to be the Christian God.
Invoking “impossible” is sneaking your conclusion into your premise.
Akhôrahil was the one who put it into the premise - I'm merely following it where it leads.
It may be necessary here to differentiate between believing in Christian doctrine and merely believing in the stated facts of the matter. You can take a utilitarian approach to divine punishments even if the believers think this is the wrong approach. I would not automatically be convinced about the doctrinal issues _even if_ heaven and hell could be empirically demonstrated.
I don't think it works that way in Christian theology. Killing a baby would be a mortal sin, not because of the calculus about the baby's future welfare (as a soul), but more because you are assuming a judging role reserved for God. It's the same argument that prevents the Christian from condoning abortion, even of babies that will die promptly at birth, or have horrible birth defects, and which prevents the Christian from condoning euthanasia, no matter the degree of suffering averted. There is only one God, and only He can properly make such decisions, and anyone who puts himself voluntarily in that position is more or less repeating the sin of Lucifer.
Obviously baby-killing is against Christian doctrine, but that hardly changes things if I'm a utilitarian who has merely become convinced that heaven and hell exist in the matter described. If I can save multiple people from hell, even at the cost of going there myself, that's surely a noble thing from the utilitarian standpoint.
Sure, but your assumption that a utilitarian could exist (or at least that a rational person could adopt that point of view) in a world in which religion had already been proven to be exactly correct -- Heaven and Hell are known to exist, exactly as laid out in the religious tracts -- strikes me as illogical.
So far as I know, utilitarianism is where you get to if you have no direct mandate from God as to what is right or wrong. If the mandate exists beyond cavil, why be utilitarian? Particularly if it goes against the mandate?
It's sort of like imagining a person doing a carefuly Bayesian analysis on the Monty Hall problem in a world where the doors are all already open. Why would anyone do that?
Not sure? I mean, I could imagine becoming convinced that Heaven and Hell are real without starting to adhere to Christian dogma just because of that, and applying a utilitarian framework to the new situation and trying to make the outcomes for people as good as possible. If a dictator is running a torture center and a pleasure palace, sending people there according to how they act, does that mean you have accept the whole thing and do as he says just to get into the pleasure palace instead of the torture center? It might be pragmatic, but it's hardly intellectually sound.
Well, OK, but this is shaping up to be a pretty weird situation. You've got an omniscient and omnipotent Being[1] handing out infinite reward and punishment -- what's left to make it a straight-up monotheistic religious revelation? All you really need to further assume is that the Being has benevolent motives, and you've got a classic Abrahamic religion, brought to hard reality, and why would anyone *not* want to do as he says? He's infinitely wise! Surely following his wise and benevolent orders is going to get you a better outcome than winging it on your own. Consequentialism is dead, because it's pointless, there's *already* an answer to every moral question, derived by a mind infinitely better than your own.
The only way out of this situation it seems to me is to assume the Being is either cruel or indifferent, and perhaps "indifferent" when you're handing out infinite punishment and reward is inherently cruel. I mean, practically speaking, what would be the difference?
So what we've got is sort of an anti-religion: the Universe is ruled by a cruel God. Under those circumstances, is there anything else to do but go mad, so at least you might find refuge in enternal denial of the horrible reality? I mean, if you played by the rules you *might* end up thinking you've got infinite reward, but if God is truly wicked, he could change his mind at any time, send you to infinite punishment just to jerk you around. I mean, he probably would, if he's a mean bastard. You couldn't trust anything he said, so "the rules" are almost certainly meaningless. So once again, consequentialism is dead. There's no point to trying to build a personal or social morality, because the evil God will just make sure it ends in misery anyway, because he's an infinite rat bastard.
This is what I mean by the problem seeming inherently self-contradictory as posed. Once you posit infinite reward and punishment, and a Being who can hand out either, it seems to me the entire scope of moral philosophy becomes void. The only reason we *have* any such thing -- the only reason we can ask ourselves what the best course of action is and have intellectual scope for success as well as error -- is *because* we don't know any such thing[2].
------------
[1] He can hardly hand out *infinite* reward and punishment if he is himself limited in what he knows, or his power. Somebody could escape! In fact, given *infinite* time, it's pretty much guaranteed *everyone* would escape, unless he has perfect knowledge and infinite power to prevent it.
[2] Wait...did I just say Christians (or any other similar faith) don't actually know God/Heaven/Hell exists? Sort of. The way a Christian theologian would put it, I think, explaining our existential uncertainty, is that while we may be certain God exists, we cannot know for sure his Master Plan.[3] So we are still left in the dark -- we still don't know exactly what we should do to fit into The Plan, and so we have scope for the debates of moral philosophy.
[3] And why doesn't God just tell us The Plan? Because (the argument goes), the only way we can *deserve* salvation is if we work some of it out ourselves -- we *must* have scope for moral philosophy and make the right choice of our own free will, because that (we're told) is definitely part of The Plan.
I’ve come to very similar conclusions, and have been personally very terrified of hell as well. Interestingly, eternal conscience punishment doesn’t hold up especially well as a theory of what hell is like when you read the Bible closely. You’ll find that most people don’t actually believe in a tortuous hell just by how they act, as you say we should do everything we can to avoid it, and that’s obviously not the case
It would be really horrible if Christianity was true. I would prefer the existence of Cthulhu to that of Jahve - while it would be horrific to get consumed by Cthulhu, it's nothing personal, and it will be over pretty quickly, unlike with what Jahve would do to me.
There is a Christian belief that believes in annihilation as opposed to a Dante’s Inferno style eternal torture. I also believe this is more backed by the literal Bible than eternal torture.
Absolutely, I think there’s plenty of viable solutions to the problem of hell. I’m just glad I found a place that appreciates the magnitude of what it’s purported to be, in a philosophy course I remember someone pegged the payoff of hell as -50, which is nothing like getting cooked forever
Scott has some thoughts on the weird interaction between Consequentialism and Eternal Punishment.
https://unsongbook.com/interlude-%D7%92-cantors-and-singers/
I quote: "Hell must be destroyed" - you'll have to read Unsong to see how that works out in the end.
I honestly find this one of the most disturbing aspects of my own Christian faith, I tend towards the idea that we should read references to hell as "the second death" literally, but many Christians disagree with that so it's not something I can be 100% sure on.
I think the worst thing isn't that Christians think I will go to Hell, but that I *deserve* it and that eternal, obscene torture is only right and proper for the unutterable crime of not worshiping their god.
It may be you have experienced caricatures of Christian theology rather than its genuine self. The eternal torture of Hell is being deprived of Heaven, and the joy of Heaven is being united with God.
That is, there is no torture *deliberately* applied to those who die in mortal sin -- no fire and brimstone, hot pokers, eagles eating your liver et cetera (outside of medieval scare paintings and Sunday school Grimm's tales told by twits). What happens (according to theology) is that after you die all that is right and true is suddenly made as clear as day to your soul, and if you have rejected grace permanently you finally and fully understand that you have chosen to be wrong, alone, wicked, and without the greatest friend anyone could ever have. And it's the knowledge that you have rejected eternal happiness when it was freely offered to you which constitutes the torture.
It's a bit like the conventional movie plot where shallow man rejects wonderful woman who Truly Loves Him Like No Other for the bimbo, and then years later, after WW has moved on and gone out of reach forever, shallow man realizes his mistake and has to live with it.
I was a Christian for 27 years and long thought it strange, the idea I was taught, that if people discounted the idea of God because of evidence against it, then died and found themselves in a Christian afterlife, they would still reject that idea. This was the claim of Church leadership; it was BS. Stranger still would be if God judged you harshly for finding the evidence for (godless) evolution compelling during your life, even after you admit you were wrong given the new evidence. Ultimately I had to admit that a God who would behave this way was immoral and deserved no worship.
I would be shocked if the majority of Christians didn't believe in the good old fire and pitchforks punishment hell.
If "hell" is merely oblivion, that's what I'm getting _anyway_ as a physicalist. That's no way to threaten me into good behavior.
1 x infinity and (all of humanity) x infinity = the same amount of infinity, so it wouldn't matter if you saved only one person or all of them. In fact, you could offset an otherwise very large but finite amount of evil by just converting a single person to Christianity by this logic.
This isn't a pathology of utilitarianism. It's a pathology of Christianity. It's basically the justification for Spanish colonialism. It doesn't matter how barbaric we are. Future generations of these savages will now be Christian, and that is worth literally any finite cost, no matter how evil.
Yes, but then i think you have to posit the religion where only christians go to hell, which should....cancel it out?
It's less likely that this religion is true since we have more Bayesian evidence for Christianity
Whereas Christian doctrine is somewhat saturated with notions of infinity, utilitarianism is not. This is a solved problem in every actual operational deployment of utilitarian principles in real-world decision-making systems, which work by not allowing infinities.
Note that infinite elements in a sequence, and thus infinite time, doesn't necessarily imply infinite magnitudes. Plenty of infinite sequences sum to a finite total, and if they also have a temporal element, asset valuation, reinforcement learning, decision theory all introduce temporal discounting, which still eliminates infinities. For instance, even with a non-zero probability that hell is real, there is also a non-zero probability that it does not last forever and God may someday decide to let the hell dwellers back in, which means you need to exponentially decay the future suffering in accordance with the probability it will end, and your expected sum of total suffering in this case is no longer infinite.
Yeah, this isn't a reasoning of utilitarians - it's the reasoning of the people who torture suspected witches for their own good, since as long as there's even the tiniest chance they will repent...
I agree this logic has been used for evil, but I think you’re mixing two different points.
If heaven is real, then I would prefer as many people there as possible. Basic utilitarianism saying they’re all equal is a failure of that system, not Christianity.
I say “basic utilitarianism” because you could define the math differently to fit our intuitions better.
I wonder if you could make a transformation of how you calculate - for instance, assigning "one person goes to heaven" a value of +1, "one person goes to hell" a value of -1, and everything else meaning 0 (as it's completely meaningless in comparison). At this point, you could rephrase utilitarianism as "act in such a way as to maximize the net amount of people going to heaven".
Fun consequence: at this point, you will have to condemn God for not just putting everyone into heaven, the way a *moral* person should.
Condemning god risks getting on the wrong side of his wrath!
This is true - I would have to fight not to do it no matter how called-for it is. It's a risky choice to complain to murderous tyrants, after all.
Lets for the moment simplify to a case that is more well understood.
I am using preference utilitarianism with moral beings that are agents (meaning we have really good mathematical understanding of what it means for them to have a "preference").
In particular their preferences will allow us to create a utility function for an individual agent which maps a worldstate (or superposition of worldstates) onto a real number between 0 and 1, in such a way that higher numbers refer to preferred worlds.
Now it becomes clear that infinite utilities are meaningless, they are simply not possible.
When you feel like you have a thing with infinite utility, what you actually mean is that this thing is infinitely more valuable than this other thing, but this is just as well described as the "infinite" option being 1 utils and the reference option being 0!
Now it becomes clear that from a total utilitarian point of view the preferences that feel like they are infinite are not actually special in any way, and thus get no special treatment compared to normal preferences.
Now on the matter of personal rationality, you are completely free to assign utilities to worlds however you like, and if you actually only value on particular family of worlds (eg the worlds where you go to heaven) then feel free to assign them utility 1 and everything else utility 0.
Someone else is obviously free to disagree on your evaluation of preferable worlds.
Hedonic utilitarianism has larger variance in formalizations, so it is hard to make broad statements about what happens in edge cases.
One problem a Christian (or believer in any other religion, but I speak as a Christian) utilitarian faces is the fact that the Bible doesn't tell us to exclusively focus on prayer and evangelism to the neglect of all else. One solution is to argue that there are multiple things of value in this life and the next, so good done and souls saved are both important, the other approach would be to argue that the best way to optimise for souls saved is a life that glorifies God through both words and actions. Not sure what side I come down on but understanding the overwhelming importance of infinity does give you some sympathy with religious extremists - not sure if that's a productive extension of empathy or the start of a dangerous decent from Christian Effective Altruist to psychotic fundamentalist. Of course, an omnipotent God certainly could force us all to worship him, so the fact that he doesn't (assuming he exists) would suggest that we have good reasons to seek voluntary conversions rather than using coercion or indoctrination.
I don't feel that we ought to include infinity. It intuitively feels to me that it makes much more sense to think of heaven and hell as being 'the maximum joy/suffering it is possible to design a system to experience," which is so vastly greater than we can comprehend that it might be worth treating it as infinitely greater than everything in our mundane life, but doesn't break mathematics.
But other than that... all this seems 100% logical, even with that. Pascal's Wager is good logic. The best argument I've found against it is the 'uniquely privileged position' - if we imagine 10,000 people and one god, only the god has infinite power over the 10,000 souls, but, also, the odds that whoever is speaking to you is God is 1 in 10,001. But even with these two cases, this still suggests we ought to treat the promises of heaven and hell as being of far greater importance than all earthly things.
I dunno. As a utilitarian, the argument still makes sense to me. I just haven't yielded out of the feeling that I'm being conned, even if I can't figure out how.
Say what you will about Roman Catholic dogma, you have to admit that infinite reward/punishment idea has got to one of the most effective memes of all time.
I’d argue that the power gets its strength from the infinite punishment part. Think of the millions of minds that have been ‘hijacked’ by the fear of the rod that power wields.
You definitely do need to look into the history more, you seem to think the church spread solely through the threat of force, but that only works after you've persuaded large numbers of people that you're correct (through the promise of both spiritual and material benefits).
Being attractive to power is just an advantage for a meme. It doesn't make it not a meme.
I should have mentioned that I was using the term meme in the pre Internet sense. A self replicating ‘organism’ in the ‘ideasphere’. As described by Douglas Hoffsteder in “Metamagical Themas”.
Utilitarianism only makes sense in an information abundant environment. If you are assigning the value of infinity to one of your variables, maybe you are not in an information abundant environment.
In the *absence* of heaven or hell, i.e. a strictly religiously-derived eternal moral punishment or reward regime, then there can be no infinities, given that life (and the population of the Earth) is finite.
And if you *do* live in a world in which the existence of religiously-derived eternal moral punishment or reward is known to exist, how can you be a utilitarian? Everyone would be 100% deontologist.
For any system with infinite utilities, there's a better system without infinity.
One problem with infinity is that infinity times anything is still infinity. If there's a 1% chance dying now will send you to heaven, and a 99% chance living a long saintly life will send you to heaven, these options both have the same utility, and so you're indifferent between them.
As another example, if you include both positive and negative infinity, you get mathematical problems with how to add them together. E.g. if you're 99% sure going to church will help you get into heaven, but there's a 1% chance the church is actually run by Satan and will help you get into hell, you have an "infinity minus infinity" calculation.
These can be resolved by having a multidimensional utility function: utility is measured by two finite numbers (secular, divine), with any amount of divine utility being preferred over any amount of secular utility. This makes divine utility play the role of infinity. Now, if you're 99% going to church grants you +1 divine utility, and 1% sure it's -1 divine utility, you'll go to church, regardless its cost to your secular utility. Though logically consistent, this theory will essentially ignore secular utility, and raises the question why you bothered using multidimensional utility anyway.
ok, so no one has made the mathematical counterpoint I use, so I guess I'll say it. First off, I do think that our moment-to-moment enjoyment is bounded. I've never heard of anyone being infinitely happy over any open interval of time, so I'm going to assume it's bounded; let's normalize the maximum at 1 util per second just for the sake of things. This seems pretty consistent with most descriptions of heaven too; it's usually "you're doing super cool stuff for all eternity and Jesus is there which makes it even better."
This still leads to the question though of "can't we generate infinite utility if you're getting 1 util/second forever." And the answer is "sort of, but only if you have a poorly defined utility function"
This is where exponential time decay becomes important. This is a commonly used economic concept. Take, for example, the question of "why don't you stick all of your money in a super safe bond, sort of starve yourself for now, and then enjoy the 1% more next year?" Generally the answer is that we devalue future goods at some percentage. So maybe I'm indifferent between you giving me 100 apples now and 105 apples one year from now. It's an Econ 101 concept, so I'm just bringing this up so that it's clear what follows isn't "special pleading."
So assumedly you can guess where I'm going with this. You can just apply an exponential decay to your time function, (let's say 5% per year) is going to be (sum over (0.95)^n)*(seconds in a year) = 631139040 utils. This is a lot of utils, but if you're sufficiently dubious of heaven, you can multiply that by a probability and get a number that's less than torturing someone.
Also, if you believe a fire and brimstone christianity, the end of the world is coming *soon* so you don't need to stress infinite populations and appropriate discount rates for that.
Is a finite mind even capable of experiencing infinite torture? It can only have a finite amount of discernible mind states, and each mind state should probably only be counted once in the utilitarian calculus.
Perhaps God gradually expands the sinners' minds to make them perpetually open to new experiences of suffering.
That sounds like the kind of thing he would do.
Although you could also just argue for a finite level of net pain for an infinite time. Hell could be just stubbing your toe, _forever_.
Enough finite pain could be quite a bit. I've fantasized about Hitler experiencing all the pain he caused-- we're talking about geological time.
I have chronic pain. If it's hell I certainly wouldn't have it replaced by the groveling degradation Christians seem to recommend. If I were to imagine living an infinite amount of time with this pain as a thought experiment to make it more theologically hell like, I still wouldn't take the deal. But crank up the pain enough, inquisitor and I might change my mind if I could. The inquisition was fond of showing its victims the instruments of torture. Christianity has this aspect of the Inquisition built in when it starts talking about hell.
I'm a hedonic utilitarian, but also a materialist. There's no way for a finite material brain to experience infinite good feelings. The problem can't arise.
"The inclusion of infinity into" any system of organized thought "leads to weird counter-intuitive conclusions." I seem to recall the issue (infinity) being so inchoate that it drove
some late 19th century mathematicians insane. However, I'm pretty sure that if
we'd paid more attention to harmonic convergence in Complex Analysis 302
all these issues would be resolvable by various orders of infinity. Which is another
way of saying -- don't place any naive certitude onto conclusions you have drawn
regarding a simpleminded calculation that involves infinity unless you have a competent mathematician holding your hand.
Considering you forfeited your anonymity is it even a wise or ethical idea to start psychiatric practice again? I fear that train has already passed
Are most psychiatrists anonymous online? If not, I'm guessing there's no major problem here.
I might be wrong but I'm pretty sure most of them are at least pseudonymous
Well you could solve that problem by starting a new psychiatric practice under a false name. Maybe wear a false mustache and dye your hair also if photos of (the real) you have emerged.
"No, sorry, I'm not *that* Scott Alexander. Kind of wish I was, he seems like a smart guy, and I'm flattered you think I might be he. But no I'm a much less interesting person who just works at head-shrinking to pay the bills."
As far as I know, there are 2 main reasons why psychiatrists want to be anonymous online.
First as fairly obvious, and applies to most white-collar jobs in general. You don't want to cause problems for your employer, and personal opinions can get very problematic.
It's not a problem anymore in Scott's case, since he is effectively self-employed now.
The second problem is closer related to the specifics of the area - psychiatrists don't want their interactions with patients to be personal, as it can be distracting to both parties (there are more complicated issues related to that, but let's leave it at that for now).
Scott fixes that by not taking ACX readers as patients - and that's why he asks other medical professionals to refer people to him directly.
At least that's how I understand it based on his previous posts on that topic.
Sorry for the typos - can't fix them :/
From other friends that are therapists, the reason they strive for anonymity on the Internet is so that patients can't find out anything about them and thus from preconceptions that might prevent a treatment from working. It isn't required that a patient already be an ACX reader for that to happen, just that they are able to discover Scott is the writer and then take a look.
I understood that is why Scott is focusing on the pharmaceutical side of psychiatry so such concerns are lessened - I think he said that in one of his early ACX posts
I wonder how Freud dealt with the problem?
The Dark Web
I laughed.
More generally, psychiatrists and clinical psychologists becoming famous (well, internet famous) is not a problem unique to Scott, although I guess the typical solution is to pivot from practicing psychiatry/psychology to becoming a public figure, author or public speaker.
I believe Jordan Peterson still has his practice, and of course Oliver Sacks wrote a dozen or more books about his patients (with I'm sure the relevant identification details changed) throughout his career. Both of those psychiatrists are significantly more famous than Scott. Scott hasn't even written a best-seller yet.
Jordan Peterson is a clinical psychologist, and Oliver Sacks was a neurologist, so neither was a psychiatrist, in case that makes a difference, say in terms of their professional code of ethics. But yes, both of them had to deal with having patients at the same time as being famous.
If I understand Scott's statements correctly, he's sticking to medication management rather than doing significant amounts of talking.
I've been practicing for the past four months. I'm not taking blog readers as patients, and as far as I know the vast majority aren't aware of my blogging. I'm not doing more than the minimum possible amount of therapy.
Scott, have you looked into using Polymarket recently? I wrote a guide on the subreddit explaining how to use it cheaply:
Prediction Markets is something that is discussed by Scott quite frequently, so I thought I would write a quick guide on how to use Polymarket cheaply.
Polymarket (https://polymarket.com/) is a cryptocurrency based prediction market running on the Ethereum blockchain. Even though it is crypto based, you can only buy shares on USDC, a stablecoin pegged to the US dollar.
Because of high gas fees, it was somewhat costly to deposit small amounts of USDC into Polymarket. However recently Polymarket opened the opportunity to deposit USDC via Polygon, a Level 2 solution on the Ethereum protocol. That makes transfers into Polymarket essentially free.
Here's a guide to use and transfer cryptocurrencies into Polygon / MATIC: https://newsletter.banklesshq.com/p/how-to-use-polygon
Once you transfer assets into Polygon, you can use QuickSwap (https://quickswap.exchange/#/swap) to swap whatever amount into USDC to then deposit to Polymarket.
How to borrow USDC to deposit on Polymarket
NOTE: I do not advice you to do this. If you're playing with prediction markets, do it with money you can afford to lose
I use this system because I want to keep my exposure to certain cryptocurrencies while certain markets settle.
Once you move assets to Polygon, you can then deposit them into AAVE (https://app.aave.com/markets) , and earn interest while you wait to for the right market to buy shares for. Once you are ready, you can then use those assets as collateral to borrow USDC, and send it to Polymarket.
Also, you can suggest markets in the Polymarket Discord. Using the steps above, I found it cheap and intuitive to start playing with Polymarket's prediction markets.
Your apps are developed by 5 year olds?
what's the difference between polymarket and augur? Why is it necessary to create yet another token for exactly the same purpose?
They have different websites and communities, and maybe making another copy of a token might attract a more high-quality community or something.
(Also, it's not strictly necessary to use money when posting predictions. E.g. Metaculus and PredictionBook both run on Fake Internet Points, yet both make for very-high quality predictions and discussions.)
Is moral nativism noble? https://whatiscalledthinking.substack.com/p/is-moral-nativism-noble?r=8nz8&utm_campaign=post&utm_medium=web&utm_source=twitter
My view on that is "don't let the perfect be the enemy of the good". True contrition is best, but if attrition is all you can get, then it's something at least. https://www.newadvent.org/cathen/02065a.htm
Turning this into yet another "my camp versus your camp" polarisation is not going to be fruitful, I think.
I read a 1990 paper about the 'psychoticism' personality factor and it mentioned cross-assortative mating, or the idea that certain traits (and especially mental illness risk factors) are heritable because e.g. schizophrenic women are more likely to be impregnated by psychopathic men. It's a consideration that hadn't even crossed my mind before but it makes a certain amount of sense given how an abused woman is much more likely to end up with an abusive spouse, and that should have predictable effects on their offspring.
Can anyone comment on this? Is this something that's borne out by research, or is the evidence against it? Or has it simply never been studied?
Fascinating question, looks like research petered out in the early 1990s but this set of research seems quite diverse in studying the relationships of people with schizophrenia and discussed the implications of sex differences in the experience of the same illness: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=schizophrenic+female+marriage&btnG=
Interesting to consider how the p factor plays into this set of ideas, that all mental illnesses are essentially predictive of each other. The Dunedin Study results essentially suggested that all mental illness can be explained by a single factor: general psychopathology https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=p+factor+psychopathology&oq=p+factor#d=gs_qabs&u=%23p%3DiSh99t8ZG7QJ
Yeah, I'm actually trying to integrate the p factor research into my current study, and I was looking at correlates of the "thought disorder" theory that's a candidate of being the underlying cause of the p factor. Cross-assortative mating is pretty unrelated but it struck me as interesting.
I think I should start off with some general apologies:
(1) For diverting the sub-thread on the book review post into "All Tolkien, All The Time". Sorry about that! But you know, when the Spirit moves you...
(2) Apologise to benwave. I am of a querulous disposition and I was too harsh and antagonistic about a minor matter, viz. the most acceptable way to spell Maori in English. If I am telling them they may suggest but cannnot compel people to do what they wish, I should remember the same applies to me as well. I apologise, benwave.
I think of topic drift as normal fannish and possibly rationalist behavior. It may have something to do with ADD or possibly just being interested in a lot of things.
If people had more they wanted to say about the Arabian nights they could have said it.
I may be more tolerant because I'm interested in Tolkien, too. I'm not going to mention possible topics I wouldn't have been interested in because Don't Invoke.
For some not very clear reasons your comment has made my evening. Thank you for apologizing to someone else!
IMHO, if a reviewer brings up Tolkien, all of the resulting diversion is on him :)
I've also been trying to think of a post that Scott either wrote or (more probably) linked to. It had a title like "How to create a state without even trying" and it described how the self-interested and rational decisions of people in an anarchic pre-agrarian society very naturally led towards the birth of the centrally-governed feudal society with taxes and policing. Anyone have any idea what I'm talking about?
Maybe this? https://slatestarcodex.com/2019/10/14/book-review-against-the-grain/
No, but thanks for the idea. It was definitely much longer than that, I'm sure it wasn't a book review either.
Since it's not a book review, this probably isn't it, either:
https://slatestarcodex.com/2015/03/18/book-review-the-machinery-of-freedom/
It sounds exactly like the argument Nozick makes in Anarchy, State, and Utopia, so I was trying to search on those words, but it's not popping up for me.
Nozick wanted to show that it could happen without anyone's rights being violated. I don’t think the Laurence requires that constraint.
This is correct, but the style of argument of the piece I'm thinking of is very similar to that used in this book. It just doesn't concern modern society.
My life became better when I realized complaining about complaining is still complaining.
I used to just crumple internally when I'd see someone write about how people just aren't tough enough. I'm not particularly tough, I don't know how to be different, and I'm obviously just inferior.
Then I realized I was seeing material by fragile spirits who just can't take it when they hear complaints. While I grant that listening to complaints can be wearing-- especially if they're other people's complaints rather than one's own-- it still seems like complaining about other people's lack of stoicism is a little much.
Yes, this is something I realized just within the last year that has made my life a lot better.
Change whiny behavior by modeling not whining in various ways, not by whining about it. Different ways of modeling it can be effective for different types of whiners.
"How can he not smell bad with those armpits, with that mouth?"
I see this behavior coming from the men on my father's side of the family a lot. They're a bunch of tough guys who like to complain about behavior they see as weak or fearful. And they're good at appearing tough- a lot of people fear them. But the truth is, witnessing other people's fear and sorrow distresses them profoundly. So much that they're constantly guarding themselves by signaling that they won't tolerate it, whether they're complaining about a specific person's behavior, a subculture, or a whole social movement. At the end of the day, they are the ones who don't know how to experience those emotions and not be torn apart by them. Their apparent toughness comes from their ability to cage their sorrow and fear without ever resolving them. Over time, it eats them alive.
I see so much of myself in this comment. My wife suffers from anxiety (and some depression) and every time she has an episode I just want to run from the room or tell her to stop it. And I hate that about myself. I, on the other hand, refuse to show when I’m having a rough time emotionally and can’t bring myself to ask for help when I’m overwhelmed. On the outside I look like one of the most stoic people I know, but that’s only because, to borrow your phrase, witnessing other people’s fear and sorrow distresses me profoundly. And I know that my response is to signal, through body language and other subtle cues that I don’t hide well, that I won’t tolerate it.
I want to offer a few words of encouragement. I've seen my father come a long way in this area while his brothers stagnated. I think it's because he escaped that environment where showing too much fear or sorrow puts you in a position of higher risk that someone will come along, decide you're weak, and try to take things from you. It takes a long time to adapt to a life where that's not a risk, but it's possible.
I used to take after the men on my dad's side a little, and while I tried to show sympathy for other people's fear and sorrow, it disturbed me to witness anyone really lose their shit (hence understanding how this feels from the inside). I found it helpful to remind myself that it takes strength to be with someone in their time of need. When a loved one is falling apart, that's an opportunity to show some strength by being present and offering comfort, even if you can't solve any of their problems. I guess that's the hard part- feeling powerless in the face of another person's seemingly irreconcilable emotions- but simply showing compassion and facing the fear with them goes a long way toward ameliorating those episodes of intense emotion until the storm blows over.
I hadn't thought about that behavior as part of a trajectory, so thanks.
I kind of wish we weren't playing with real money in this universe, but here we are. If your instincts are off-kilter, you're screwed.
I like the real-money metaphor!
However, on closer inspection, is there really a difference between real money and play money? Or is it just about how serious people take it? By extension, if we see life and the universe as something very serious or play, isn't that also just a matter of perspective?
As I said, I really like the real-money metaphor!
I feel like your logic is missing a few steps.
Imagine two adults walking in the street, A and B. A is barely functional, in fact, he's incontinent and shits himself. B dislikes the smell, and makes a derogatory remark.
At this point, would you say that B is incapable of dealing with weakness and it eats him alive, and thus B only has apparent toughness, and in fact, B is the weak guy?
To me, that seems obviously ridiculous. Just because B complains of the weakness of A, doesn't mean that 1) B is incapable of dealing with emotion and therefore exhibiting weakness himself, or 2) A is actually the strong person here, because they accept their weakness.
Yes, I took it to an extreme, but still - if your logic was correct, it wouldn't break.
I wouldn't say that, and no logic contained in my comment points to it by necessity. My comment is not a model intended for broad application to every example of someone who complains about another person's weakness or fear. It is a particular case of the "tough guy act" serving as a shield, and I think provides a useful perspective on why some people habitually complain about the fear or weakness they see in others. There are lots of people in the world. Sometimes they do similar things for different reasons.
> (fully vaccinated people only, please!)
This seems like a reasonable and normal proviso for social gatherings of this sort, given the current conditions.
What if the vaccine had different exclusion criteria? Would we describe the "vaccines please" differently? Would we be pressured to not have such gatherings at all?
It would matter what percentage of people were medically excluded from vaccination. It might also, for cultural-social norms reasons, matter what categories were excluded.
What if people who recently got a joint replacement were excluded from the vaccine, and that was 0.5% of the population? What if benign physical abnormalities of the ribcage excluded you from the vaccine, 3% of the population? What if pregnancy was an exclusion, and that was 0.5% of the population? What if there were a genetic marker: 15% of people have the marker, 10% of them get a very severe reaction to the vaccine, so they're excluded as a whole group, but only 40% of people know their genetic marker status?
(fully vaccinated people only, please!) It seems like an unreasonable and abnormal proviso tbh, and shows that the rational community is as irrational and scared as the general population. Government and media fearmongering works.
I'm not fully vaccinated, and it strikes me as a completely reasonable criterion. You _should_ exclude me until then.
Why? Vaccinated people are at near-zero risk (probably less risk than that imposed by the travel to the event) from un-vaccinated people. And un-vaccinated people who attend are knowingly exposing themselves to risk. Thus, the only people who are at risk are those who are choosing, for themselves, to experience that risk. Why is that an unacceptable state of affairs? People choose to participate in risky activities all the time.
The pandemic is not yet over. Multiple unvaccinated people gathering indoors is a public health risk to the community at large.
Back when nobody was yet vaccinated, everyone was excluded from participating in the meet ups.
I have nothing to do with meet ups, but I'd bet that when the pandemic is over, say August, nobody will care who is vaccinated and who is not.
I don't buy that there is a significant portion of the population that is both at serious risk _and_ unable to be vaccinated. Most unvaccinated people at this point are either extremely low risk (young people) or are choosing not to be.
Transplant patients, cancer patients, HIV positive people, etc. Some people either can't get vaccinated or, if they do get vaccinated, do not pose a strong immune system response, often because they are on immunosuppressant drugs (https://www.hopkinsmedicine.org/news/newsroom/news-releases/organ-transplant-recipients-remain-vulnerable-to-covid-19-even-after-second-vaccine-dose)
This is true of all diseases- transplant recipients have to take immunosuppressant drugs the rest of their life, which makes them vulnerable to literally everything your or my immune systems could easily fight off. Does this mean we can't gather anywhere indoors ever because we might have a cold that could be really bad for them? No. But it means that we should vaccinate people against diseases that could be only sucky for you and me but potentially fatal to them, and that we should not gather indoors while unvaccinated while there is a disease out there that there immune systems have never seen before.
This is a really small percent of the population we're talking about, but also a small sacrifice on the part of you or I. Vaccines are available if we want them. Of course there are people out there that will never get vaccines, but thankfully a) herd immunity isn't binary, the more people with immunity the harder it is to spread; b) people who have been infected also have some immunity; c) waiting also allows transplant patient's doctors to collect more data, potentially figure out an optimal dosing strategy for booster shots/timing of shots around immunosuppressants/best treatments for this subset.
This above is about transplant patients because I know the most about them, but it applies in varying degrees to other immunocompromised populations as well
Well, if I were organizing such an event, I might not be thrilled by the idea of un-vaccinated people knowingly exposing themselves to risk _in my house_. And even if my vaccinated friends and me were not in any real risk, and if the people engaging in risky behavior did so knowingly... if anything ended up happening, the event organizer would be responsible, to some degree. I would certainly feel responsible, in such a situation - and that is why I would apply similar measures, if I were in the role of the organizer.
Now, if you feel differently about the risk analysis, and you would like to host your own event, I'm not stopping you.
And, to be fair, this all depends on the size of the event, the expected number of non-vaccinated guests, and the current virus incidence rate in your area.
The organizers would _absolutely_ not be legally liable and I don't think they would be ethically liable either. And in any case, I think you are over-estimating the risk here. It's not allowing base jumping. I would be very surprised if the median age of these meetups wasn't below 40.
I just think that, with vaccine availability at where it is right now, the societal risks of meetups are relatively low (certainly lower than lots of other activities we have no issue with). To the point of the original reply: yes these things have risk, but the reaction to them is completely out of proportion to the way we act around similar risk levels. It's one thing to just be generally risk averse, but I guarantee you that individuals involved with the organization of this event have undertaken similarly risky activities on a regular basis with less thought or concern.
I'm not an anti-masker or covid denier. I got the vaccine as early as I possibly could, and still wear a mask without complaint for reasons of social signalling, but I still think that the way people think about COVID risks in general is completely irrational. And for a group that prides itself on rationality, it's a bit incongruous.
A) You're right about the median, at least for the usual people attending the meetup in question. However, one of the hosts is 76, and this does affect our risk tolerance. (A few guests are also much older, but I'm more comfortable assuming they won't come if they don't feel safe. However, we are setting our safety levels on the assumption that Dad *is* attending.)
B) Pushing back gently/honestly curious about "but I guarantee you that individuals involved with the organization of this event have undertaken similarly risky activities on a regular basis with less thought or concern." Not having done the math but knowing our usual habits, my quick guess would be "true, but only for things we value very highly." Most of us tend to be pretty risk averse in general, and in this case the more risk averse people are setting the limits.
But if you have done the math, I'd be curious what you think is about this risky that we are likely to do regularly. All I can think of is driving and flying, both of which gate some pretty important stuff.
(Not commenting on the overall discussion - I've done that elsewhere - but wanted to respond to those specific points.)
Well, that would work if people were different. But as it is....an unvaccinated person invited to go to an event, where he hobnobs with other unvaccinated people who may be carrying the disease, is not only exposing himself *but also* is family, friends, and anyone else with whom he'll come in contact back home.
Were it the case that *those* people would not resent the event organizer for encouraging something that exposed them more than they would otherwise be to the disease, then the organizer could rest easy.
1. I would not agree about near-zero risk. Highly reduced risk yes, especially of dying, but there's a not-insignificant risk of getting infected, and I don't think we know to what extent the vaccine protects against long Covid?
2. Unvaccinated people risk infecting each other in the regular manner. Surely you don't want this to happen?
Precisely. It's becoming very difficult to understand exactly what western gouvernements wants to achieve. At first it was avoiding hospital saturation, then protecting the elderly and other high risk people, but now it's apparently reaching herd immunity through vaccination. Not realistic, and i can not see what practical advantage it would have. Virus will not be eradicated as there are multiple animal reservoirs. The high target means vaccinating children or very young adults, who are not at serious risk anyway. It looks more like a political goal instead of a reasonable health measure.. .
OK, that just proves the general point. If your vaccine works, you shouldn't be worried; if your vaccine doesn't work, then it doesn't matter if those round you have been vaccinated. The rational community is behaving irrationally.
Agreed. You know... it’s a funny thing. Naming themselves “Rationalists”undermines the whole rational project. Communities already suffer from group think. But a community that labels itself rational (and therefore everyone else irrational) is really setting itself up for failure.
It proves nothing. There is a risk to yourself and others if you attend unvaccinated, increasing with fraction who are unvaccinated (or partially-vaccinated). The risk for small fractions is low, but establishing a low but nonzero limit is logistically difficult and what fraction is acceptable is highly debatable. The only stable Schelling point is 0. This also has the benefit that if a few people disregard it - it's a request rather than an enforced demand - that remains safe.
Let's accept your argument. Non-vaccinated people pose a fractional risk to others. Although the risk is fractional, it compounds into a large risk once everyone gathers together. Because risks are best avoided, non-vaccinated people should not attend the meetup.
Very well. Let's keep going. Vaccinated people can also become infected. We know this because there are a cases of vaccinated people becoming reinfected. An infected person poses a fractional risk to others. Although the risk is fractional, it compounds into a large risk once everyone gathers together. Because risks are best avoided, vaccinated people should also not attend the meetup.
The conclusion is that no one should attend the meetup. But the conclusion is clearly absurd. Reasonable people do not try to reduce risks to zero.
None of this even mentions what "the risk" is. Which, is equally important. The risk to a vaccinated person is that they might experience a mild cold. Much different than drowning in lung fluid.
I hear you. I worked at a COVID recovery center for homeless over the summer and never got sick, because I wore a good mask and we practiced social distancing as much as possible. Now I'm fully vaccinated, and I know that the risk to myself of severe disease or death from COVID (which was already extremely low, much lower than the risk of death from the amount of driving I do in a week or two) is now basically non-existent. However, I totally support event organizers requiring vaccination for entrance, especially if it's an indoor event. That's not because those people pose a risk to me, but because the continued community spread of COVID leads to deaths and suffering- some people can't get vaccinated, or don't post an immune response to the shots, ie transplant patients. Once the probability of spread is low enough/prevalence of vaccines is high enough/effective vaccination strategy or treatments are available to immunocompromised people, I don't care what unvaccinated people do, but until then, requiring vaccination is an incentive for people to get vaccinated even when there risk is basically nill, and a signal that society is valuing vaccinations in people.
You are completely missing the point. The *Schelling* point.
You're also treating risk as binary, when it is not. An order of magnitude of difference in risk is meaningful and has practical consequences for what is a prudent course of action.
That is not entirely true. You are overlooking a major public health issue, which is the emergence of variants (that may defy existing natural or vaccine-derived immunity). Viruses, unlike bacteria, can only mutate during an active infection. So the speed with which variants will emerge -- variants which, again I will emphasize *could* evade vaccine or natural-infection derived immunity -- is determined by the number of active infections that are going on, even if those infections are mild -- do not threaten the life of the infected, or are even asymptomatic.
That is, every unvaccinated person is an invitation to the virus to come try its luck at evolving, a throw of the dice humanity makes to see whether *this* time the virus will turn into something new and nasty.
So there really is a John Donne sense in which the choice to be unvaccinate might impose future costs to even the vaccinated. Indeed, it necessarily imposes even present costs, in the sense that it necessarily raises the probablity of a deadly variant emerging, and a sensible polity would take precautions against that -- research into new vaccines, for example.
1) if the coronavirus is not eradicated, but more people are vaccinated, the probability of a new variant emerging per day/week/month goes down. It's not binary, where it can't evolve with no hosts or it can evolve with some hosts, it's more of a spectrum, with the less hosts available the less likely it is to evolve new variants
2) most variants evolve in immunocompromised people, some of whom also do not post an immune response to current vaccine doses (https://www.hopkinsmedicine.org/news/newsroom/news-releases/organ-transplant-recipients-remain-vulnerable-to-covid-19-even-after-second-vaccine-dose). Getting healthy people vaccinated means less people those immunocompromised people come into contact with could be carrying COVID, lessening their chance of contracting COVID and lessening the chance of new variants evolving in them.
Is he? Your argument assumes that if everyone was vaccinated today that the corona virus would be eradicated. But while that may be true of viruses like the Measles, it's not true of Corona viruses. Covid-19 vaccinations confer short lived immunity. Reinfections will be common even in those who have been vaccinated. More transmissible variants are going evolve. But, it's rare for variants to become more virulent. And fortunately, the symptoms of those who are reinfected are mild.
So far as I know some of your key assumptions are, at best, unproved and some of the logic is inappropriate. For example, that vaccination confers brief immunity. Never seen that demonstrated, and what little data I have seen seem to suggest the opposite.
And certainly it's unusual for new varients to become more virulent, but you know, those rare events are exactly where new pandemic disease comes from in the first place. Clearly SARS-CoV-2 *did* evolve to become much more virulent some time in the recent past. That's why it's a problem now. When it comes to problems that have the potential for exponential growth (like pandemics), you cannot ignore the Black Swan events.
There are animal reservoir anyway (both wild and domestic) so it will never be eradicated. Now that the vulnerable people got a shot and soon two, the risk of covid becomes less than a typical flu, and i hope people will remember how much they cared about flu before end 2019 (i know, 2 years, a different era...the answer is: not much). Why is it different now, apart as tribe signaling?
"If your vaccine works, you shouldn't be worried; if your vaccine doesn't work, then it doesn't matter if those round you have been vaccinated."
Whether the vaccine works is not binary. It can work very well, but not perfectly, in which case you get to decide whether (in this case) "if you would otherwise have gotten it, roll a d20; on a natural 1, you still get Covid" is good enough. (I am simplifying outrageously, this is true on a population level but surely not on a personal one, but I'm not sure we know enough to tease out the personal-level risk yet.) If it isn't, you may want to require two natural ones (on people who would otherwise have gotten it - you and the person potentially transmitting to you) to get it - which is what caring whether the person you are considering interacting with has been vaccinated is.
False dichotomy. Perhaps the _real_ situation, that it works well but imperfectly?
We wouldn’t want to see unvaccinated people permanently excluded from meetups—one of the things I most value about meetups is the diversity of attendees, which includes varying beliefs about COVID. I’ve seen people everywhere in this range, some wanting in-person meetups ASAP and some on the other extreme of no in-person events until vaccination. Not surprised that the Bay Area folks are on the super-cautious side, but hopefully even they will open up to unvaccinated people in time.
Speaking as a host:
A) I agree with you about valuing intellectual diversity. This is a personal risk tolerance calculation, not an ideological purity one.
B) From my perspective, the level of Covid concern under which it is a good idea to hold large indoor events but request only vaccinated people attend > the level of Covid concern under which it is a good idea to hold large indoor events and welcome everyone. When we reach the latter level of Covid concern, we'll drop the requirement. I fully recognize that many, maybe even most people here, think we've already reached the second level, but we (the specific people hosting) are for various reasons not very tolerant of this specific risk. We're willing to host a large indoor gathering, but not (yet) to do so without worrying about vaccination. Give it another month or three (depending on case numbers - IMO, county case rate will hit zero/day in another few weeks and it will all stop mattering, but I'm not exactly a superforecaster). I have no intention of criticizing anyone who is inclined to host under different rules, but I'm not presently willing to.
So yeah, pretty much what you said, but with a bit of emphasis on "in time" ie "give it a bit".
Apparently some transplant patients don’t retain immunity even when they get the vaccine.
Well yeah. When you're taking drugs to suppress your immune system every day, you kind of expect them to do just that.
Are people with natural immunity (aka they got COVID already) not allowed at the event?
This. More specifically, is there some reason why a vaccine would be desirable or necessary in the case of a person who has already had COVID and has a positive antigen test?
It's been weirding me out that somehow the conversation has morphed into vaccinated vs. non-vaccinated, not antigens vs. non-antigen-possessing. Is there some point of vaccines beyond developing antigens? Why would someone decide to take a vaccine with a non-zero risk attached of a severe reaction if they already demonstrably have antigens?
Honestly asking.
I agree, except I didn't know this word "antigen". I hereby invoke the Sapir–Whorf hypothesis.
I'm pretty sure I got Covid, but I didn't bother confirming with a test (my wife tested positive, then I got sick in quarantine several days after my negative Covid test.) I've therefore put off vaccination to allow others to have the opportunity first. Should I be allowed into the party? Trick question, I can't go to any ACX meetups as I'm not in California.
Because if you say "antigen-possessing," even in a space like ACX, a lot of people are gonna say "Huh?"
David says he's already answered this, but I will too - immunity is immunity is immunity. My personal suggestion would be not to assume you're immune if your Covid case was 10+ months ago and your test also months ago, because I personally know one person who got it twice, about ten months apart, so I'd worry about immunity waning. But "immunity, or good reason to believe you have immunity, ideally at around the level given by the vaccine (which I assume is what natural immunity gives you)" is the actual request.
David Piepgrass, I hope this answers your question for you, if you were in California!
My guess is the #1 reason would be clinical uncertainty. That is, *did* you actually have COVID, and did you have it long enough/serious enough to develop a solid immunity? The diagnostics are not so infallible that those questions can be answered with 100% certainty, so just to be on the safe side, you might take the vaccine.
It's also conceivable that the vaccine improves immunity over what infection provides. There are situations where that can plausibly occur, e.g. the original infection was very mild and/or mostly snuffed out by the "innate" immune system, so that you did not develop much of an "acquired" (antibody-mediated) imunity. Vaccination could in principle provide valuable boost to acquired immunity, more or less because vaccination would simulate a much more severe infection.
I have issue with "long enough to develop serious immunity". If you got asymptomatic and /or short covid, what kept it mild in the first place? An efficient immune response, for whatever reason (overall better immune system, cross immunity through pervious corona infection, virus entry more difficult in your cells,...) If anything, it should indicate less risk for future infection than a hard case, regardless of antigen level. IMHO the issue is to be really sure you got infected in the first place, and is the reason why so many countries insist on at least one shot for people who already got it (in mine it's the full two shots, not surprising given how well covid was dealed with in my country (Belgium)). It's just easier not to bother pre-testing... Antigen testing was done very little during this crisis anyway. I see three possible reasons: expensive, not reliable or it was not in interest of gouvernement control that people knew of they had it or not
Nobody knows (why some people get a very mild case of COVID and others don't). That's one of the big mysteries of this bug, and will generate a lot of interesting research in the future. So the argument that you should get a shot anyway is just covering all the bases, just being cautious.
And considering the extremely small cost (and risk) of a vaccination, versus the orders of magnitude greater risk of COVID itself, I'm baffled why anyone other than certain very special categories of people even think twice about it. Do you also hesitate to get an update to your tetanus vaccine if you step on something sharp in your suburban backyard not within 100 miles of a cow, so that your odds of actually contracting tetanus are teeny? I mean, why bother? It's a waste of glucose to task the neurons with working it out.
And from the public health viewpoint, it's worth remembering the potential negatives of *not* warning people to be careful hugely outweigh the potential negative of being too cautious. People will grumble about the latter, but in the case of the former they're capable of hanging you from a lamppost. So it doesn't surpirse me at all that those people are super duper cautious.
I haven't seen this point yet in the other comments but apologies if I'm duplicating someone else's thoughts.
My sense is that we're currently in an in-between state where enough of the population is vaccinated that it's viable to have events with a "fully vaccinated people only" standard but not enough to have achieved anything resembling herd immunity.
Without herd immunity there's still the potential for exponential growth. In an exponential growth regime, each new case not only infects R>1 people in the first generation but R^2 in the second, R^3 in the third, etc. until a lockdown or other intervention occurs. So even if each person at the event is either safe or consenting, the majority of the risk is borne by people not at the event.
Once herd immunity is achieved (or exponential growth is otherwise prevented) the calculus changed dramatically. Since new cases will decay rather than grow exponentially, the risk level is approximately limited to the attendees. With risk limited to people who've given their consent, I wouldn't see the need to limit unvaccinated people from attending.
All this thinking is of course contingent on the specifics of the current situation. If only 40% of people knew their genetic status and as a result only 40% could be vaccinated then we wouldn't credibly be able to expect herd immunity in the coming months, and a different policy would likely be better. But given the current situation (no herd immunity, large number of vaccinations, herd immunity possibly on the horizon) this policy seems coherent at a minimum and IMO likely correct.
Good explanation, I appreciated this comment
I don't think the numbers bear that out. California is currently at 54.4% partially vaccinated and 43.6% fully vaccinated. I think the lowest credible estimate for prior infection is ~20%. Assuming 95% immunity for full vaccination or prior infection, 70% for partial vaccination and zero correlation between vaccination and prior infection, that's 59.2% effective immunity, R0 for baseline COVID was ~2.5; the variants are more infectious but not by such a large margin as first reported, more like a 20% increase for R0 = 3.0. Which means effective R value of 1.22 even if we assume a completely homogenous population with zero ongoing COVID-avoidance behaviors, both of which are wrong.
So, with quadruple worst case assumptions, we could tease out barely-exponential growth that would be outpaced by ongoing vaccination and even then would top out at <8% of the population infected before full herd immunity. With only three out of four worst-case assumptions, I don't think you get exponential growth at all.
Our host being high-risk on account of age, it is perhaps reasonable for him to not volunteer for an enhanced risk of infection even if he is mathematically confident that his death would be a singular event and not the beginning of an exponential chain, of course. But as a matter of broader social policy, "we can't afford to do this because exponential growth is still on the table", is probably not really on the table.
I super appreciate the analytical response! That's a really good analysis and I was definitely overestimating the risk of exponential spread. There are still two points of concern in my mind but I'm much less confident on them than I am my original point.
First, even a R value of 1.22 is a doubling every three generations. That's about a doubling every two weeks which while not awful probably isn't great. And 8% of the population isn't huge, but would still represent a 1/3 to 1/2 increase in Covid cases which isn't great (that's very much an extreme worst case scenario though). Both of those things are still dramatically different from the "unbounded exponential growth" view I had in my head.
Second, from a policy point of view I can see the validity of "wait until herd immunity" as a Schelling point for large events with unvaccinated people both to create incentives to get vaccinated and to help with coordination. That said I'm not a policy expert in the slightest and haven't seen the kind of leadership on a local/state/national level that would suggest that that kind of policy is really creating incentives.
Really conversation analysis and thanks for doing the analysis!
>Second, from a policy point of view I can see the validity of "wait until herd immunity"
What makes you think we haven't already reached herd immunity? What is the observable datum that you are waiting for that signifies "OK, now we have reached herd immunity"?
My analysis had to make every possible worst-case assumption to get an R value of 1.22, so the actual R value has *probably* already dropped below zero. And of course observed Covid cases and deaths have been in a roughly exponential decline for several months, which is about what you'd expect once the threshold for herd immunity has been crossed.
That’s a really interesting point and I really appreciate the chance to think carefully about what I actually mean when I think about herd immunity.
My first instinct would be to pick either a vaccination or vaccination + natural immunity such that the R value is < 1 even without any changes in behavior. Based on the numbers we’ve been discussing it seems like that’d be around when the people who are currently partially vaccinated become fully vaccinated.
That said I do see the value in defining her immunity based on the number of cases rather than the number of vaccines.
In general my intuition is the possible downsides of another major covid wave justifies hanging on the last few weeks to hit “full herd immunity” but also see how other data (case counts, etc) suggest we might be there already.
This is the problem Germany is having (and trying to solve awkwardly by generalizing "vaccinated" to "vaccinated, recovered or tested", but not quite succeeding because the resulting pressure on test centers has made official tests hard to get). I don't know how I'd solve it for serious events, but private social gatherings to me seem sufficiently low-scale that it's perfectly fine to go "if you don't like it, find another party".
I'm not all that familiar with psychiatric ethics so I won't comment on "people who read the blog." But why don't you just let people apply and then waitlist the extras? Or refer them to other practices you trust. That would allow you to fill up your practice more easily and provide more value to your target market. If you get overwhelmed you can either hire other psychologists or you can hire a secretary type to handle referring the ones you reject (which you'll be able to afford if you're that swamped with clients).
Restricting supply is a bad move if you want to avoid getting overwhelmed. It's a good move if you're trying to make Lorien the Gucci Bag of psychology. But that seems like the opposite of what you want.
Scott's written about this he wants the therapist client relationship to be one that isn't also a fan relationship
Okay. That's a good reason not to let people apply from the blog. The rest of what I said still holds, I think.
For the book reviews contest, not to tip the scales, but I've find myself fatigued from the format and disliking the more recent set of reviews and recalling the first handful of reviews kindly (but only vaguely).
Are others feeling fatigued from the large tranche? Things like:
- the first time someone links to SSC/ACX with a wink-nudge to the judge-author, it's funny and cheeky. After that, it feels like pandering but maybe appropriate context, so hard to evaluate
- reviews that are too long held my attention earlier in the tranche, but are now quickly disfavored
I've felt that a little bit. I don't come away with the feeling that I've learned a lot or seen something in an entirely new light.
Of course the review on Georgism was superb, and I was happy to have read and learned from it
That one stood out.
Interesting that I've heard so many people praising the Georgism book review.
Personally I didn't like that review and didn't finish it, it was too long. On the other hand, I'm fairly strongly biased against Georgism.
I'm curious whether the people who really enjoyed the review have a similarly strong bias in favour of Georgism.
Not sure how you measure bias towards an idea from the inside separately from the idea just being correct or not. For myself before reading the review I wasn't deeply familiar with Georgism but had a vaguely positive impression, and reading the review I liked it a lot.
I don't think I've ever heard a coherent argument against Georgism, which seems like a gap so if you have one I'd be interested.
I did like it because Georgism was completely new to me. Definitely my favorite review so far (incl all nonfinalists) even when penalizing for the length.
I have a positive impression of Georgism at least as far as the land-value tax, but have twice abandoned that review for its wordiness.
to give an alternative opinion: I still enjoy the guest reviews and read them all regardless of the length
I still read and enjoy them but the later ones definitely seem less memorable.
I have times when I am too busy to read them. Then if I want to catch up I have to search the generically named “book review” emails to figure out where I left off. I don’t think the reviews are added to a central location but it would help.
They are all labelled in the archive as "your book review", your methds probably the best way to keep track of what you've actually read.
Agree; I think that the only reviews that have been really worthwhile were "Progress and Poverty" and "On the Natural Faculties" possibly also the one about LBJ. We did not need to see 12+
I'm an entrepreneur who has founded and exited (some successfully) a handful ventures. And yet, I still don't know the best destination for finding co-founders/partners for future ventures. I have a few semi-evolved concepts that I'd like to explore. I talk to friends and friends of friends, which is certainly productive. But I wish there was some kind of a matching service where one could meet people and immediately jump in on some brainstorming sessions and the like. Any suggestions?
This sounds like a job for CoFoundersLab (formerly called Founder Dating).
Thanks! Signed up.
Foir what it's worth:
https://angel.co/
*For
Seems like it would be hard, as you need an extremely high level of trust to start a company with someone. The only way I can see it working if you don't have a preexisting relationship is if both parties are extremely wealthy and high status, so that the reputational costs of defecting outweigh the short term financial benefits.
Wouldn't you have the same concerns with finding long-term romantic partners? And yet, many of the traditional "trust"-based systems are being replaced by online matchmaking.
I'm perfectly willing to date a person for years before deciding whether or not to marry them. Typically, you want to launch your joint business venture faster than that.
Hmmm. What should require a higher-friction entry point: A multi-year romantic relationship or a multi-year professional relationship? I’d say it’s pretty even for me.
You can exit a romantic relationship early without much cost if it is clear it isn't working out. Exiting a business partnership early can incur a lot more cost, so it requires a lot higher trust at the start.
Doesn’t that assume that the “relationship” starts on day one? I’d say that there is often an extended period of collaboration (6-12 month is not unusual) before commitment. Also, often there are more than two founders, so perhaps the risk is diluted somewhat. Does this mean that we should change our metaphor to polyamory relationships? (Something I don’t know much about).
Here's a cute game theory puzzle.
Alice is given two bags of gold. She gets to look inside each of them and see how much it contains; the value of the contents are IID unif(0,1) (for the sake of the puzzle we shall assume that gold is infinitely divisible).
She picks one of the bags and shows its contents to Bob. Bob then chooses one bag, and Alice gets to keep the other.
What is Alice's optimal strategy? What is Bob's optimal response?
Each of them is making a decision based on their guess of what the other will decide. So if you think you can outguess the other person, you do that. If you don't think you can outguess the other person, you base your decision on a coinflip, thereby setting your odds to 50-50.
I'm assuming that the bags have unequal value, and that Bob can't tell which one is better.
Wait, can Bob tell how close the bag he sees is to the maximum or minimum value? I don't actually know what "IID unif(0,1)" means.
In that case, I think Alice should show the bag that's closer to the median possible value, in order to give as little information as possible about which bag is better. And Bob should choose the bag he sees if it's above the median, and the other bag if it isn't.
Sorry, IID unif(0,1) means that the amount of money in each bag is chosen uniformly at random between 0 and 1, and those two values are independent.
You're absolutely right about Alice's strategy. Bob's response to it then doesn't matter - he gets the same expected payoff from any strategy.
No, Bob's response matters -- he gets 0.66 expected payoff from the strategy Bullseye described, and only 0.5 from playing randomly.
I think Bob still gets only 0.5 expected payoff from using Bullseye's strategy. Half of the time the unseen bag will be smaller than the shown bag, and half the time it will be larger, and those two regions of possibilities are symetric around 0.5, regardless of whether the shown bag is above or below 0.5.
He'll get it right only half the time, but he'll tend to get it right when it matters more and wrong when it matters less.
When they're both high, or both low, he'll get it wrong, but the difference will probably be small. When one is high and the other is low, he'll get it right, and the difference will probably be large.
Aargh, this is the answer to a different, similar, puzzle, not the one I actually posted. Whoops...
I've thought about this some more, and come to the same conclusion.
Firstly, Bob probably doesn't know whether the unseen bag is high or low. (Either because Alice's strategy conceals that information, or because he doesn't know her strategy.) In that case, the unseen bag's expected value is the middle value, so he should make his choice based on how the seen bag compares to that.
If Bob uses the above strategy, and both bags are high, Alice should show the one that's less high so he'll pick that. If Bob uses the above strategy, and both bags are low, Alice should show the one that's less low so he'll pick the other. If Bob uses the above strategy, and one bag is high and the other is low, Bob will pick the high one regardless of what Alice does.
If Bob chooses at random, it doesn't matter what Alice does, so she might as well assume he's using the right strategy. (Also, he's very unlikely to choose at random if the bag he sees is very high or very low.)
When both of them use the right strategy, Bob will only pick the right bag when one is high and the other is low (which is half the time). But he choice will tend to make a bigger difference when one is high and the other is low.
I'm still not sure I understand your reasoning here's. I think you have an issue here:
> Firstly, Bob probably doesn't know whether the unseen bag is high or low. (Either because Alice's strategy conceals that information, or because he doesn't know her strategy.)
Note, it's a premise of the problem that Alice will choose the optimal strategy. With that given, we can be absolutely sure that Bob doesn't know whether the unseen bag is high or low. If he did, he'd be able to rob Alice every time, contradicting the premise that her strategy was optimal.
Where's the issue? You quote them saying Bob doesn't know whether the unseen bag is high or low, then you provide another argument that Bob doesn't know whether the unseen bag is high or low.
(Although you're argument doesn't always hold. If the bag Alice shows is high, then knowing that the other bag is also high doesn't tell Bob which he should pick. And the same goes if they're both low.)
I'm trying to understanding the overall reasoning better by picking at one particular point where we're not aligned. Bullseye says Bob *probably* doesn't know whether the unseen bag is high, but my understanding is that there's no way at all Bob will have information about the unseen bag, so there's some discrepency here about our understandings of the underlying structure of the problem. I'm just pulling at a loose thread; I don't have a grand counterargument at this point.
I didn't calculate so far but that seem likely on the basis of my analysis of the case where bags contain 0, 1 or 2 with equal probability. Alice can only have a chance of winning if one has 1 and she shows it to Bob. But Bob can pick randomly and will get 1 on average.
does Bob know whether the bag he was given is the one that Alice looked at?
Alice can look inside both bags.
Bob can always go for the "pick a random bag" strategy. So Alice can never do better than 50/50 on getting the larger bag.
Can she enforce that? Yes. She shows him the bag whose amount of gold is closer to 0.5. It's easy to see the being closer to 0.5 is an independent property of having more gold, so this gives Bob no information about whether he should switch. Therefore again, Bob has no option better than picking at random.
(An interesting follow-up I don't have an answer for: In this scenario, Bob can keep his EV but reduce his variance by always taking the bag he sees. Is there a strategy that also doesn't let Bob reduce his variance?)
> She shows him the bag whose amount of gold is closer to 0.5
> ...
> Bob has no option better than picking at random.
Maybe I'm misunderstanding what you're saying, but I don't think this is true in this case. I think bob sticking if the bag's value is > 0.5 is more optimal than 50/50.
I ran a simulation of this using the following code, and it agreed: https://pastebin.com/8zBkvD6S
Effectively, if bob is shown a bag with 0.6 gold in it, and he knows alice is using the above strategy, he knows that the other bag either has >0.6 gold, or <0.4 gold.
He also knows, based on random distribution, that those two outcomes are equally likely. As such, if he switches there's a 50% chance he gets <0.4, and a 50% chance he gets >0.6. However, if he sticks, there's a 100% chance he gets 0.6. A 50% chance at <0.4 or >0.6 has an expected result of 0.5 gold, and sticking has an expected result of 0.6 gold, so it makes sense to stick. The opposite logic works if he's shown one with <0.5 gold.
This is correct. Bob can only get the larger bag 50% of the time, but he'll end up with more gold on average because he usually wins when the difference is big and loses when the difference is small. With optimal strategy if the difference between the bags is 0.05 he'll almost never win, but if it's more than 0.5 he literally can't lose.
Hm. This is frustrating.
I think this is still Alice's optimal strategy, though. If Bob sticks to this strategy - he switches bags iff he sees a bag with <0.5 - then there's no way to beat him if you get one bag with >0.5 and one bag with <0.5. The best you can do is always beat him if both bags are on the same side of 0.5, which this strategy does.
Without having looked at any other replies, here's my attempt (although I'm not totally convinced):
Alice shows the bag with the amount of gold with the smaller absolute distance from 0.5 (if they have the same, show either one).
Assuming this is, in fact, the optimal strategy and Bob has reasoned as such, then he has been given no information about which bag contains more gold (since half of those bags of gold with more extreme amounts are larger than the one he has seen, and half are smaller). He is forced to take a 50/50 guess.
Each of Alice and Bob end up with 0.5 probability of getting the bigger bag, for I guess expected value of 0.5 each, each game.
You're right about Alice's optimal strategy, but Bob can still do better than 50/50 I'm pretty sure. Bob doesn't get any information about which bag is bigger, but he does get information about how big the difference is likely to be in each direction.
Assume Alice follows that strategy, and shows Bob a bag with value of either X or 1.0-X, with X < 0.5 in either case.
Bob knows that the unseen bag U is either in the range 0 < U < X, or 1.0-X < U < 1.0.
Those ranges are symmetrical around 0.5 and the probability distribution is uniform, so Bob's expected value of U is always 0.5.
Therefore, if Bob sees a bag with value > 0.5 he should take it, and if he sees one < 0.5 he should take the other. You're right that he only gets the bigger bag half the time, but he ends up with more gold on average because he usually loses when the difference is small and wins when the difference is big. If the difference between the bags is only 0.05, Bob loses almost all the time. But if it's bigger than 0.5 he will literally never lose.
I wrote a simulation for this (https://pastebin.com/8zBkvD6S), and the values I got in practice are:
Bob's winnings in 10k rounds always picking random: 5033.5036381671935
Bob's winnings in 10k rounds always picking the bag if its value > 0.5: 6302.920417899139
That's pretty representative. You get around 0.5 if you pick randomly (obviously), but if bob picks the bag if it's >0.5, and switches otherwise, his expected value is ~0.63/round on average.
This matches up with what you're saying, just wanted to add that if you simulate it with code, you get the same outcome.
Thanks for checking that. I might be misreading something, but I think this line is wrong?:
bob_sees = if (b1 - 0.5).abs > (b2 - 0.1).abs then "b1" else "b2" end
Two issues actually: first, should be (b2 - 0.5) instead of (b2 - 0.1). And second I think you're showing Bob the wrong bag? Since the absolute value on each side is the distance from 0.5 so if b1 has a bigger distance then b2 should be the one you show him.
Yup, you're absolutely correct! Thanks for spotting that.
Fixing that line gives me:
> bob_sees = if (b1 - 0.5).abs > (b2 - 0.5).abs then "b2" else "b1" end
Bob's winnings in 10k rounds always picking random: 5005.065702020646
Bob's winnings in 10k rounds always picking the bag if its value > 0.5: 5847.856184073208
Printing more details for a few dozen sample rounds makes it looks like updating it to the above gives what I meant there.
.. I really wish you could edit these posts so I could go back and update the link to a fixed one with a note, but so be it.
Awesome. That's that, then (I note the correction you made further down, but I'll reply here).
I think my big mistake was aiming to "get the bigger bag" rather than "maximize the gold payoff". In retrospect it's obvious that for-sure payoff of >0.5 is going to be better than E(payoff)=0.5 in terms of payoff, even if it doesn't expect to give you a larger bag than the other person.
Now I wonder, with Bob's better strategy in mind, does that call Alice's strategy into question? If that's Bob's optimal strategy, then that means Alice is forced to get robbed every time it's a very-small bag vs a medium-large bag, which makes me uncertain whether there's not some way to improve her strategy.
I think it's still Alice's best option. With Bob's strategy, he's guaranteed to always win in cases where one bag is above 0.5 and the other is below, no matter which Alice shows him. At least with this strategy Alice is also guaranteed to win when both are on the same side of 0.5.
Bob just has a better position overall- the power to make the final choice just counts for more than Alice's advantage in information.
There can be no strategy that favours either Alice over Bob, or Bob over Alice. This is because each can always pick at random which means their expected gains will be the same.
If you think you have found a winning strategy, then remember that random selection by the other player will render your strategy ineffective.
(This is similar to how there can't be a winning rock-paper-scissors strategy because one player can always play randomly.)
If this were a simple odds-evens game, you'd be right. But it's not; the game is rigged in Bob's favor. If he uses the right strategy, he'll come out ahead no matter what Alice does.
The fact that Bob can see one of the bags means that he can guess which one is better. And the bigger the difference, the more likely it is for his guess to be right.
Ah! I see what you mean
The strategy if showing the bag closer to 0.5 gives Bob a utility of 0.5 + 1/12, whereas the lower bound on his utility is only 0.5. (by the strategy of picking one of the bags at random)
Is there a proof that this strategy is optimal? What is the source of the problem? Thanks.
This week I wrote about my experience learning Polish, with a highlight on the number system:
https://denovo.substack.com/p/making-polish-count
I think it's a good case study (no pun intended) on how things can seem easy if you are used to them, but they are really quite complicated for outsiders to learn.
Perhaps a good linguistic text explaining of the origins of this mess might help. Remnants of https://en.wikipedia.org/wiki/Dual_(grammatical_number) are probably a part of the puzzle.
Oh yeah, some plurals in Polish (e.g. ręce, where the plural of ręka "should" be ręki) are definitely remnants of Dual.
We're fully aware our number system makes no sense. At least there's enough redundancy that you'll just sound weird but perfectly understandable if you mess it up. There's basically no chance of passing for a native speaker if you aren't one anyway, and we're used to foreigners butchering the language - it's still impressive when someone makes an attempt.
BTW, reading should be way, way easier than speaking, but some of Lem's works are a wild ride even if you do know Polish. They're worth it, though!
Does anyone have experience dealing with industrial-grade procrastination when it comes to doing work? I'm currently doing some freelance content-writing work, all of which is tedious but manageable, and every time I try to do any of it I spend >90% of my time scrolling through social media, random websites and the like. Sometimes it's educational (reading ACX and other similar content) sometimes it's absolutely mindless cycling between the same six websites (it brings to mind the slot machine gamblers in that book review the other day). I'm not incapable of focus generally – I can spend hours reading something or working on a particular interesting project – but focus on demand really eludes me.
Deadline pressure helps, but I find myself waiting until the deadline is imminent before getting anything done, and even then not always managing to meet the deadline. I'll mentally put it off ('the deadline is 6pm, but they won't actually check until the morning so really the deadline is 7/8/9am the next day') and it can do bad things to my sleep schedule.
I don't think this is an unusual problem (and I'm aware it's made much worse by the work being boring and working from home on my own schedule); does anyone have any useful insights into working with it? I'm theoretically open to trying Modafinil/similar, but my worry has always been that I'll take it and then fixate on something irrelevant instead of the work that needs doing.
Sounds like ADHD pry all the usual solutions for ADHD
I'm open to the idea of having a named condition, but looking at the list of traits associated with ADHD I don't think I fit. From this NHS list (https://www.nhs.uk/conditions/attention-deficit-hyperactivity-disorder-adhd/symptoms/):
carelessness and lack of attention to detail – not remotely
continually starting new tasks before finishing old ones – no more than typical
poor organisational skills – not at all
inability to focus or prioritise – yep
continually losing or misplacing things – almost never
forgetfulness – rarely
restlessness and edginess – not generally
difficulty keeping quiet, and speaking out of turn – not an issue
blurting out responses and often interrupting others – not an issue
mood swings, irritability and a quick temper – not at all
inability to deal with stress – I can handle it reasonably well
extreme impatience – no
taking risks in activities, often with little or no regard for personal safety or the safety of others – for example, driving dangerously – generally not
I had this problem when I was finishing my dissertation, but I did kinda overcome it. (I finished the dissertation, anyways.) This is what I did, ymmv:
Step one was to train myself to track when I was actually working and when I wasn't. I started keeping a document where I'd sort of "clock in" and "clock out" of work, in a very fine-grained way, so any break or distraction wouldn't count as clocked in. With some practice I got to a point where I could actually remember to "clock out" before I began scrolling through random websites. So then I had a way of knowing how many minutes I'd spent actually working out of a long day sitting in front of my computer, and could meaningfully set myself a goal of actually-working for, e.g., 4 hours that day. To some extent, just tracking it helped me to be more mindful and do a little bit better.
Step two was to get some sort of enforcement mechanism in requiring myself to achieve however many hours of work a day. A good reward for me was to play video games, so I made a rule that I couldn't play any video games until I had done my 4 hours of work. If I finished early, I could bask in the joy of video games for the rest of the day with a blissfully clear conscience; if I got stuck procrastinating, I wouldn't get to play video games at all.
It was an important part of this to make the work requirement fairly modest. If I'd said I needed to do 8 hours of work on my dissertation, that would have felt so hopeless that I wouldn't have even tried. Sometimes I set the goal at just 2 hours a day. It was still better than trying to do everything in the two days before I have a draft of a chapter due.
When the above approach started failing because I started cheating and playing unearned video games (!), I signed up for Beeminder (https://www.beeminder.com/) so I could arrange to be charged actual money for such failures. Ridiculous, but it did often help when things were really bad.
I had a very strategy for my dissertation (+1 for beeminder!) except that I pegged it to pages written or re-written if I was editing that day. I think I wound up paying once or twice but eventually the habit was engrained enough and I liked seeing how far above the redline I was once I got in a good productive groove. I think this probably took a few months, YMMV.
Other things that helped:
1) Getting into the office way earlier than anyone else. Like if your advisor usually gets in at 8:30, be at your desk writing by 7:30. Anecdotally more of my labmates preferred working around midnight to 2AM, but I think the principal (of avoiding our advisor when we needed to focus on writing) was the same
2) Telling everyone I knew that I was approximately 6 months from defending. One of those days it was even true
3) Finding collaborators to help with writing a review article that wound up being the first chapter of my dissertation, same for the 3rd chapter come to think of it
4) Realizing that I had 0 desire to stay in academia and the only thing holding me back from getting a real job grown-up job was finishing this fucking dissertation so let’s stop screwing around already
Good luck! If I had to do it over again, I probably wouldn’t. Even so I do feel proud of getting a PhD and if nothing else, no matter what, every time I go to a bar to order a drink I _always_ get…just what the doctor ordered ☺
Yes! I have been having to fight this problem my whole life and I think lately I've gotten pretty okay at it. My advice is super specific to *being me* so it might be useless to you. That said:
1) Identify the root cause of what causes me to get distracted.
2) Come up with a plan that addresses those that I AM ACTUALLY CAPABLE OF STICKING TOO
3) Do everything in the most brainless most simplest way possible that plays to my natural strengths
Root causes:
For me it's three things: task switching, anxiety, and not knowing what I'm supposed to do this very second. It's easy for me to just to a monotonous or boring task for HOURS uninterrupted, but it's hard for me to deal with "what should I be doing right now?" or coping with feelings of inefficiency and etc.
So I realized task switching was very detrimental in particular because it PRODUCED the other two root causes -- I'd be anxious I wasn't getting enough done no matter what I was doing (because I had two tasks to do, so working on one felt like slacking off on the other) and I would constantly feel like, wait what should I do right this second? Before I know it I'm scrolling reddit and refreshing hacker news for the 80th time.
Solution:
Give every major task an ENTIRE DAY unto itself. So if I'm splitting my month between a freelance project and my own project, I set up a weekly schedule. Monday and Wednesday is freelance client gig. The other days are personal work days. ALSO, crucially, I schedule client work days for days I have meetings with the client. Big interruptions like meetings (especially that cut the day perfectly in half) need to be mitigated, so at the very least I can make sure they're relevant to the project I'm working on that day.
This way, even if I have down time, I don't get caught in a secondary loop of what I should be working on. Off task? Okay, brainlessly go back to the ONLY THING I'M WORKING ON TODAY rather then get caught in an eddy of "task a? task b? task a? task b?" oh gee now I'm on reddit.
The next thing I need to deal with is the difficulty of starting things. That's the entire challenge. If I start something, I will do it. If I don't start something, wow, I have 5 hacker news tabs open and where did the hour go? So to crack this nut, I will do the "pomodoro technique" which is just start an egg timer and try to get as much done as you can before the 30 minutes is up. Except I don't care how much I get done, I'm just starting the timer so that I will actually get started on my task. When I start my task, I also give myself the goal of doing the simplest, dumbest, most brain dead task that could conceivably be considered part of the task. That gets me started, and an hour later I'm deep into the work.
So that's what works for me. No clue if it works for anyone else.
I think I have been naturally headed in the direction you are talking about (due to the "wtf this very second?!?!" thing / deciding between 2 tasks) but I really appreciate seeing your analysis of root causes and direct solutions here. Lightbulb moment. Thanks!
For writing in particular, I have two pieces of advice. (No guarantee they'll help: your mileage may vary, etc.)
1. In high school, my best friend was the editor in chief of our school newspaper. He had a great quote: "The thing that takes the most time when writing stories is procrastinating." We fool ourselves into thinking that procrastinating is okay because we're accomplishing other things, but if you mentally reorient to count all of that time as spent, unproductively, on what you're supposed to be doing, it can help to improve focus.
2. I wrote my PhD thesis a lot faster than most people. My key insight was that a crappy sentence on the page was worth a lot more than an elegant paragraph in my head. Editing is a lot easier than writing, but you have to get the first draft down first. Commit to writing e.g. 1000 words a day, and don't get up from the chair until you do. Disconnect the internet if necessary. This isn't especially novel - many prolific writers work this way.
Yes!! I have the exact same problem--I procrastinate by reading pretty much useless things online (ACX, news, commentary, etc)/scrolling through social media until I can procrastinate no more. For me, it's because if I don't plan out a project and time-manage myself perfectly, I hate the feeling of not having any time to do it and the feeling that I'm irresponsible for not time-managing myself well, so I just push it off more and more which obviously backfires in the end. And it's not that I don't like working on what I'm putting off--so then I get really into it right before it's due.
The thing that works for me and for why I procrastinate so badly is to just sit down and write/do the work--don't do anything but the task or else I'll get distracted and lost in whatever random thing happens to interest me. It's really easy for me to do more mindless things without procrastinating because with those I can multi-task. But for things where it demands all of my attention, what I do is sit down, give myself a lot of time, and do it. Make it a little more mindless and get into a rhythm in whatever you're doing so that it's harder to get distracted. Obviously if you're writing you need your brain more than doing dishes, but I think that if you get out of your head a little bit and just get into a flow in what you're doing it's a lot harder to get lost in the internet/anxiety.
Tim Urban’s blog Wait But Why had a series of what I found to be highly relatable posts on this behavior:
Part 1 https://waitbutwhy.com/2013/10/why-procrastinators-procrastinate.html
Part 2 https://waitbutwhy.com/2013/11/how-to-beat-procrastination.html
Part 3 https://waitbutwhy.com/2015/03/procrastination-matrix.html
I think I must have come across some of these before, but they really hit home. Thanks for the links!
I’m still one hell of a procrastinator myself but the thing that helps is tricking myself. Let’s say if I’m being honest with myself, today will be like most days and I will do nothing. I simply find the SMALLEST possible action that will get me closer to my goal. It can be getting out of bed. Often, it’s, I’ll check my email, then I’m allowed to go do whatever I want. I can literally open it, look at it, and close it and achieve more than some days. Luckily, something will often catch my eye on the email, and I’ll get to work. Sometimes there’s some false starts, but at least there’s now 0 barrier whatsoever to checking my email, it’s actually kinda wearing on me NOT to check it. I know what I’m doing to myself, obviously. But it can be really effective because for me the hardest part is starting. People will say “just start” but that can feel a huge commitment when you’re laying in bed. I say “think about starting”. “What’s the smallest action I could take to start?”. Or “let me go look at the page I would go to IF I were to start right now, no pressure at all”
Basically, give yourself as many chances as possible to get sucked into working, but tell yourself there’s no commitment whatsoever
This was a good post and a good discussion thread but now I'm going to actually do my work. Thanks.
You just had to stall enough until doing the work now became imperative. That always works for me too. :)
The art of procrastinating by learning how not to procrastinate, not to be confused by procrastinating by working on some other task that you also have to do, both solid forms of meta-procrastination, judoing your dumb brain into getting stuff done™
https://www.amazon.com/gp/product/B00KWG9M2E/ also very handy if you're looking for additional judo moves at a later date
Fully realizing (visualizing; paying attention when they happen) the payoffs of each alternative decision has helped me.
Of course this only works of you'd truly prefer getting the work done earlier. If you'd do the work in time and afterwards feel restless or guilty for doing nothing, then your brain is *right* in choosing procrastination.
For me, modafinil works really well. But it depends on the dosage. 1/2 of a 200g modalert pill is too much: I get exited and very creative about lots of things and can't concentrate anymore. But 1/4 pill, 4 days a week, enhances my concentration- and the satisfaction I get from work.
More specifically, my work-procrastination problem has the form of a very high perceived mental exertion for a lot of tasks, and modafinil lowers this exertion significantly.
Thanks, that's helpful to know.
What helps me:
Making notes. A piece of paper (each task gets a separate piece) and document everything related to the task. "This needs to be done: ..." "This is known: ..." "I need to ask X about this: ..." Whenever I lose focus, I look at the paper to find the next step.
Having a colleague to discuss things with. Discussing is in some sense a form of procrastination (while you are talking, you are not working), but the kind that allows you to work better afterwards. If you have the right person to do pair programming with, it's a miracle.
Walking away from the computer. That means I am not working, but I am also not browsing Reddit. Maybe a quite productive way would be to keep walking, and when I get an idea what to do next, return to the computer and do it, then walk away again.
What hurts me:
Like larsiusprime said, working on two projects at the same time, so even when you make progress on one, you feel guilty for procrastinating on the other. Even better, while you are making great progress on one, your boss interrupts you and asks about the other.
Nth for "I'm in the same boat." I'm a lifelong procrastinator who recently started a personal project, and the lack of any externally imposed deadlines hit me like a truck. What ended up helping me the most so far were two things:
- Coworking with a roommate who WFH (more generally, changing your environment so that slacking off is punished)
- Credible, meaningful positive reinforcement. Find something specific you want and resolve to not get it until your goal is done (in my case, it was a game that came out kind of close to my self-imposed deadline). Alternatively, find something to give up that will sting a bit, like sweets, until you're done.
(disclaimer: I also have work projects I'm procrastinating on that I don't care about as much. I haven't tried applying the above solutions to them yet and I'm also looking at finding some stimulants to help out)
I'm not sure if this is a direct continuation of a previous discussion, or a standalone comment, but I don't find this argument at all compelling. Many people have been called racist because they decided to investigate the issue rigorously, or because they took a position like "I don't know if there is a difference, but someone should investigate to see".
The reason is not that they investigated the issue per se, but that the reason they felt the need to and the amount of convincing it took for them to reach their conclusion. See e.g. Murray burning a cross when he was a teenager. This sets a very high prior on "maybe Murray really didn't like blacks to begin with" and makes it hard to believe that he came to his conclusion about HBD from dispassionate investigation of the evidence.
Now if a genetics prof like Graham Coop suddenly argued for HBD or called for an investigation, that'd be more surprising
What are you even trying to argue? Are you trying to argue that this is always the case? That is what it seems like you are saying, but it makes no sense.
In any event, because it is a thing that we don't know yet is a good enough reason. No one needs any other to do science. And plenty of people have been attacked *before* they reached a conclusion. Merely for doing the studying (or advocating it be done). I find your explanation lacking.
"When people throw racism accusations, they don't blame you for coming to the conclusion that black people are genetically dumber - they blame you for being easily convinced by bad/unrigorous evidence (e.g. HBD),"
The vast majority of people who will throw racism accusations do not know enough about the evidence to conclude that it is unrigorous. They blame you for coming to the conclusion, then assume the evidence must have been unrigorous.
Note that your behaviour in this very comment -- just asserting without evidence that all HBD is self-evidently unrigorous and that any academics supporting it are discredited, then assumes from this unsupported assertion that your opponents must have ulterior motives to not accept this -- is exactly the behaviour I would expect from wanted to rationalize a disgust reaction to a conclusion they never seriously considered, and not what I would expect from someone who actually had valid counterarguments to my "racist" beliefs.
I don't really get this. I don't have a disgust reaction to HBD being true. I'm not a public figure whose brand is being very woke. I'm not on social media. I'm not black. I don't partake in virtue signaling since I'm posting anonymously on the ACX comment section where I know my comments will be badly received. I don't know my IQ but my education (STEM major followed by bioinformatics PhD) basically ticks all the boxes for IQ correlates or what have you. Therefore, I have no incentive for "pretending to dislike the evidence" or whatever the hell. The worst that'd happen if I'm wrong is that I'm proven wrong on the internet, the horror.
In other words, in Moldbuggian rhetoric, I am completely removed from power incentives and I can therefore argue for or against HBD without any of the passion that comes with arguing for the sake of looking good, and only the kind of 'passion' that comes from anonymously arguing on the internet, which unless you're the extremely online kind or a teenager isn't that high to begin with. And even with all that, I'm sorry to say - but HBD is bullshit. It's basically unheard of in actual genetics circles. Their proponents are either frauds, or not geneticists, often both.
The common retort to HBD being completely discredited by the genetics community is that everyone knows it's true but is afraid of "uttering the truth" or something, which basically amounts to a giant conspiracy theory with no one at the top - Moldbug calls it the cathedral because granting big nothings a name gives them more solidity. But it's not that great a retort as people think it is, since you can use that argument for basically any position at all.
I don't know what you mean by "HBD." The claim that there are genetic correlates of race as conventionally defined is obviously true — 23andMe routinely tells people their ancestry on the basis of genetic data. If you mean the claim that blacks have a lower average IQ than whites and East Asians a higher, that's a question for statisticians, not geneticists. Since we know that IQ is in part heritable and know that races differ in the distribution of heritable traits, it's genetically possible. The question is whether it's true.
The best evidence I have seen against part of that claim was offered by Chanda Chisala, who I think shows that African genetic IQ cannot be nearly as low as some of the HBD people claim, using statistical not genetic evidence. I summarized his arguments in a blog post (http://daviddfriedman.blogspot.com/2021/04/race-gender-and-iq.html). He was able to make those arguments because he took the position seriously and looked at its implications.
Can you explain what you mean by HBD, what information geneticists have that shows it to be false or what other reasons you have to regard it as "bullshit"?
I'm sorry if I've misread your motives, but I still don't know what you hope to accomplish by just asserting that HBD is bullshit and all its proponents are frauds with no further elaboration. Especially since you acknowledge that you expect your comments to be badly received.
A lot of the priors are kind of obvious, some people just need a peer-reviewed excuse to challenge the orthodoxy.
Ironically, every single one of the obvious priors can be explained by dysfunctional cultures shaped by outside pressure (I believe the fashionable term is 'systemic inequality'), so they don't validate HBD garbage in any way - but fundamental attribution error is a thing.
If you are convinced by weak evidence, then there are *two* possibilities: either you had strong priors that lie in the same direction as the new evidence *or* you had no priors at all, in which case weak evidence is better than no evidence.
The only situation in which weak evidence doesn't lead to a conformant conclusion is when you have strong priors *the other way*. Id est, if you are criticized for accepting weak evidence on lower black IQs you are being told you *should* have a strong pre-existing assumption that there is no such thing.
It's pretty reasonable to hold a completely uninformative prior on the relationship between most phenotypical traits and proximity of some majority of one's ancestors to the equator, so granting that, the prior on whether specific extant races of humans should be genetically pre-disposed to being more or less intelligent should be that no, they are not. So you should have strong priors in the other direction, seemingly for any traits that aren't directly related to heat or light, i.e. nostril size, ear size, average body hair, skin color.
How do you know what is related to heat or light, and what is not? Heat is related to energy, which is related to metabolism. Light is related to all kinds of chemical reactions in the body, which have an impact on mind (consider e.g. the seasonal affective disorder). Hypothetically, any cell in the human body could be impacted by this. Unless we have a full model of human metabolism, such questions can only be answered empirically.
If I were an alien knowing nothing about humans, I could make all kinds of hypotheses. Perhaps it would seem logical to me that black people living in Africa are the smartest on the entire planet, because they need to spend the least energy on heating their body, and therefore have more energy left to all other things, including cognition. Also, their days are longer, therefore more inputs and more learning. Or perhaps I would assume that intelligence is proportional to body size, so taller ethnic groups have higher average IQ. (Is it considered racist to assume that some ethnic groups might be taller than others, on average?) I might also assume that countries with higher population density have smarter people, because more interaction between humans leads to stronger selection pressure.
That's silly. The natural prior is to assume that people who *look* different are different in all kinds of nonvisible ways. That's why it hardly surprises us that black people are generally better at sprinting or jumping, e.g. that a white man hasn't won the gold medal in the Olympics since 1980 -- nobody is jumping up in astonishment at that fact, or suggesting it must proceed from a massive social conspiracy.
Oops, missing words. "that a white man hasn't won the gold medal in the 100m in the Olympics since 1980."
No Asians, either, I assume?
Correct. Every gold medal winner from 1980 on has been black. In fact, no white man has even qualified for the 100m final since 1980.
And at that, the 1980 race was a bit of a fluke, since the US boycott mean most of the world's best sprinters (American, and black) could not participate. If you ignore 1980 the last white winner of a 100m gold was Valery Borzov of the USSR in 1972 at Munich.
Banned for one week (border of symbolic and meaningful) for accusing other people of having bad evidence and sloppy reasoning without giving any justification or making the object-level argument.
I am being especially harsh because this commenter unprovoked started a thread about HBD, which is reputationally costly for this site and for the people who respond to it. I am grudgingly willing to tolerate this cost for interesting discussion that discusses socially or scientifically important/interesting aspects of the topic while trying its hardest to keep temperature down, and I won't ban people for well-intentioned attempts at that, but this isn't it.
I'm a professional software developer who's interested in getting into teaching myself some hardware stuff. As a somewhat arbitrary goal I'd like to (eventually) get to the point where I can disassemble some commodity electronics, and then solder together a super basic computer that can boot CollapseOS[1]. Obviously there will be many baby steps between now and then.
Any recommendations on how to onboard my skills as quickly and uncomplicatedly as possible? Background: very familiar with high level programming languages (Haxe, PHP, Python, Javascript) decent experience with C/C++, no experience with hardware other than being able to assemble a PC and vaguely knowing what a capacitor and a resistor is, and watching my college roommate solder stuff now and then.
[1] https://collapseos.org/
One approach would be to grab a Raspberry Pi and cable it to a breadboard. If you are starting with a Windows, Apple or Linux box a good way to start with serial port fundamentals. Serial Port Complete is a good intro.
thanks!
You will need USB Port Complete for most up to date machines. Same principles as SPC. Nothing like moving bits through a UART to get closer to the hardware.
With the Raspberry Pi cabled to a breadboard you will be turning on LEDs, responding to switches being thrown and converting analog signals to digital pretty quickly. Also a lot of good books available on Pi projects.
Also check out Code Project website. They have a lot is samples under IoT projects.
There is this really cool video series by Ben Eater where he explains how to build a programmable 8-bit computer from scratch (or very basic components, rather) and explains how all the parts work[1].
He's also got another series where he describes building a computer based on a 6502 microprocessor, how to program it, how to attach various peripherals[2].
Might be similar to what you want to do. But I think he does assume some basic electronics knowledge.
[1] https://eater.net/8bit
[2] https://eater.net/6502
The collapse os person sounds amenable to asking if you can find contact info. The classic learning resource for doing something like this in NAND to Tetris, which comes out with a 2nd edition in a few weeks: https://www.amazon.com/Elements-Computing-Systems-second-Principles/dp/0262539802/ref=sr_1_1?dchild=1&keywords=nand+to+tetris&qid=1622474567&sr=8-1
There is also a free course on Coursera doing the NAND to Tetris project: https://www.coursera.org/learn/build-a-computer?
Tetris is probably simpler than CollapseOS, but you want to start as simply as possible anyway. This will probably take a while to really learn.
We've had some amazing guests recently on the Futurati Podcast.
Andreas Schleicher is something of an education policy guru, and has studied the educational systems of nearly every major country to look for common, successful policies:
https://www.youtube.com/watch?v=Ewr0OZ5Imjw
We were joined by famed futurist (and highly entertaining podcast guest) Brad Templeton, who set us straight on autonomous vehicles and chatted with us about his approach to evolutionary ethics:
https://www.youtube.com/watch?v=TFsU0NUjW9U
Elaine Pofeldt is a journalist who has made a name for herself studying 1-person businesses that bring home seven figures in revenue. If you're interested in EA or giving what you can, you should probably familiarize yourself with her case studies:
https://www.youtube.com/watch?v=YuvLyiSS9tU
The founder of HowStuffWorks, Marshall Brain, also teaches entrepreneurship and has begun writing about existential risk:
https://www.youtube.com/watch?v=fEAg20AXovQ
We also spoke with Mark Ryan about his work in deep learning and whether DL will be enough to get us to AGI:
https://www.youtube.com/watch?v=UPK4BAtlG2A
I came across a nice writeup on [How to Hire a Cartoonist to Make Your Blog Less Boring](https://mtlynch.io/how-to-hire-a-cartoonist/). Does anyone know of similar writeups on How to Hire a UI Designer to Make Your App Less Boring?
Please don't.
It might be a good idea to hire a UI designer to make your app more ergonomic, more accessible and/or improve user workflows. But making the UI less boring for the sake of being less boring a) quickly loses its novelty, b) wastes system resources and c) tends to get in the way of what the user is actually trying to do.
There was one ACX meetup already in the east bay, announced on the Bay Area LW mailing list, and it should be weekly.
If blog readers here don’t know, all bay meets ups get announced on this mailing list (https://groups.google.com/forum/#!forum/bayarealesswrong) linked from this page (http://www.bayrationality.com/). All those I’ve been to have been a lot of fun, highly recommend
I got the sense that "having a community of nerdy peers for ones kids" has been a consistent wish for a subset the comments-section community. (from discussions with marshwiggle.)
So I wanted to share that this one extracurricular math program exists, and for the first time, is accepting applicants from across the U.S.:
"Adventures with Mr. Math: To Infinity and Beyond!" https://mrmathonline.com/forms/
It only occurred to me to tell you guys about it when I was amazed by the careful effort put into the "testing/filtering" process. (excruicating-sounding-to-me; I hate assessment) Full disclosure: I know this b/c I begin working there in-person in fall!
Also, it occurs to me that if any of you have a kid achieving well in Middle School (MathCounts, etc.) or H.S. math competitions, (AMC, AIME, etc.) and you're looking for resources, I might be able to connect you with something useful. (I am not really known to hardly any of you, but marshwiggle seems to think I'm good at math coaching stuff. That's my rec.)
I don't want a community of nerdy peers for my kids.
I want a community of smart and sensible but also cool and attractive peers for my kids.
What do your kids want?
Where do you find smart/sensible/cool/attractive kids?
A lot of SSC and Unsong readers enjoyed Douglas Summers-Stay's Genesis tautogram
https://llamasandmystegosaurus.blogspot.com/2017/05/alpha.html
Jim Hays later wrote one with a different starting letter
https://calvinballing.github.io/saga/
and now I've written one with still another starting letter (thanks to Jeremiah for publishing it):
https://godexperiment.org/beginnings-an-alliterative-rewrite-of-genesis-11-23
See also
https://en.wikipedia.org/wiki/Tautogram
The request that people only come to the South Bay meetup if vaccinated doesn't apply to small children, who are as always welcome.
Also, I would appreciate it if people who know they are coming let me know by email: ddfr@daviddfriedman.com. That's so we will know about how many we are feeding. If you decide at the last minute to come you are still welcome, but we would like at least a rough count in advance.
Also, "have already had Covid" counts as the equivalent of vaccination for my purposes. Reinfection is possible but apparently unlikely.
It's interesting that the center of effective altruism isn't open to remote work for non-networking positions.
"(fully vaccinated people only, please!)" So Mr. Scott is a segregationist of the covidiot church. Good to know.
That's me, the host whose house this is in, not Scott. He was just passing on my request.
Viliam is symbolically banned for one day for interacting with trolls.
It looks to me like, when a banned user is subsequently unbanned due to their ban expiring, the ban-triggering comment disappears. (I see both Villiam's comment here, and Freddie's comment earning the first registered ban, as blank. By contrast, I see radrave's comment behind a "User was banned for this comment. [Show]" message.)
This person is banned for a hundred years (I can't figure out how to ban indefinitely).
I miss the red banhammer text.
(Minor point: since people so far have generally lived for less than a hundred years and then died forever, this is basically the same as an indefinite ban. However, this might change if somehow people start living longer due to biotech or whatnot. The world'll probably be *very* different in 2121 anyways.)
Even if people live for more than a hundred years, it's unlikely this blog will, in it's current form.
If both I and ACX are still around in 100 years, I vow to post a comment talking about this line of discussion ;P
This length of time makes me uncomfortable. The register of bans page doesn't say anything about sentencing guidelines.
https://astralcodexten.substack.com/p/register-of-bans
The page has a comments policy now. (It's the same as the old commenting policy of SSC.)
Scott,
Would you be able to explain why the gathering request attendees be double vaccinated? My understanding is that breakthrough cases are rare and deaths from said cases are extremely extremely rare, like way less than car accident death rare.
The vaccine apparently reduces the risk of catching the disease by about a factor of twenty. Being vaccinated and associating with other vaccinated people reduces it by about a factor of 400 — less because vaccinated people take fewer precautions, more because a vaccinated person who gets Covid is likely to be asymptomatic and so not very contagious.
At this point, in this area, anyone twelve or over who wants to get vaccinated can, so I thought the requirement was a reasonable one. As I tell my kids, redundancy is your friend.
It doesn't apply to small children, and the doubly part assumes one of the vaccines that requires two shots. Someone who has had one shot of a one shot vaccine is also welcome. So is anyone who has had Covid already and hasn't gotten vaccinated — perhaps I should have specified that special case as well.
David, Thanks for the reply.
Are there any numbers on the odds of a serious covid case after getting fully vaccinated? My understanding was that it is a lot lower.
I personally had covid already but I still intend to get vaccinated. I’m interested in this topic because I was under the impression that a fully vaccinated person has successfully lowered their risk of a serious covid case to the level of a risk that would be acceptable in any other context. I haven’t actually seen the numbers so this could be totally wrong.
I don't know the specific number, just wanted to describe my risk model. If you are exposed to the virus, the number of viral particles you initially get has an impact on (a) the chance you get infected, and (b) the expected severity of the disease. So it is true that for vaccinated people the chance is lower, but it is also true that the chance increases when spending a lot of time with an infected person indoors. Asymptomatic people also spread viruses, by breathing and talking.
Not sure about exact numbers, but I would guess that the difference between "fifteen minutes in a park" and "five hours inside a house" is at least as big as the difference between being vaccinated and unvaccinated. So on one hand you decrease your risk by being vaccinated, on the other hand you increase your risk by exposing yourself to many people. Both effects are real; they point in the opposite direction.
To calculate the actual risk, you also need to multiply this with the base rate of infected people in population. Ten seconds of googling suggest that the probability that a random person in California is currently infected is 10% -- that sounds too high, I probably made some mistake. Also, the number needs to be adjusted, because a typical ACX meetup participant is not a random person. (Not sure how exactly to adjust. Being smart is good, but being a contrarian can go both ways.)
Risk = general population sickness × number of people × time spend indoors × vaccination coefficient
The vaccination coefficient is good to have, but it is only a part of the equation. This may be an unpleasant fact for the individualists and libertarians, that sometimes what other people do has a greater impact on your life than what you do. Vaccination changes the result maybe 10× or maybe 100×, the behavior of the population as a whole could make a 1000× or maybe 10000× change in your personal chance of getting infected. (Trivially, if no one around you has covid, you can do whatever you want, get no vaccine, and still avoid infection.)
So the whole story is not "get vaccinated, then you are safe", but rather "get vaccinated, then you are slightly safer, and if many other people also get vaccinated and the prevalence of covid in population drops to near zero, then you are safe". Sadly, many other people don't give a fuck, or respond with indignation.
Much safer, not slightly. If I understand the numbers correctly, vaccination means approximately that if I am in a situation where, if unvaccinated, I have a 100% chance to get Covid, I now have a one in twenty chance - 5%. So I would have to get covid 20 times over to get it once, on average. Apply that one-in-twenty to everyone you're dealing with and since you both have to critically fail, you now have a 1 in 400 chance compared to dealing with the same people with neither of you vaccinated.
This is why I consider requesting vaccination important enough I'm currently willing to do it.
Also, 10% seems crazy - could you have somehow gotten the numbers for all cases recorded during the pandemic? My county's rolling average is currently ~30 new cases a day for a county of 2 million; cumulative cases are only 119,094 but compared to, say, LA we were comparatively mildly hit. Checking, total California cases look like a bit under 3.7 million and population is 39.5, so yeah, I think you got cumulative total, and most of those people should be immune right now.
I think being clear that those with immunity to Covid-19, whether natural or from vaccine, are welcome. There are a lot of people who have had Covid-19 at this point and it is bothersome that the government and media are ignoring clear science to try and shame then into getting an unnecessary vaccine. So it's frustrating to see the same blind spot repeated by rationalist.
Just a side-note from someone across the globe with zero skin in the game.
1 shot + 14 days of any vaccine is just as effective as 1 shot of J&J.
Are you asking why I'm saying fully vaccinated, instead of just having received one dose of a two dose vaccine?
The first reason is that someone who received their first vaccine dose one minute ago is zero percent vaccinated, and "fully vaccinated" is an easier criterion than "either fully vaccinated, or received your first dose long enough that it should work, which I would have to look up how long it is".
The second reason is that AFAIK first vaccine dose efficacy is potentially pretty low against some variants, see https://www.bmj.com/content/373/bmj.n1346 . I haven't checked to see if this is true yet, or how common those variants are, but I would like to err on the side of caution.
The third reason is chumra (see https://en.wikipedia.org/wiki/Khumra_(Judaism) ) - people are going to fudge it, and so I would rather be strict enough that even when they fudge it a little everything is still okay.
Thanks Scott. I was actually asking if vaccinated people need to worry at all about being exposed to completely non-vaccinated people. My impression was that very serious breakthrough cases of Covid are extremely rare to the point of being at a level where that amount of risk would be acceptable in any other context. That being said I don’t really have the numbers so that could be completely wrong. Also, one could argue that a jab or 2 is not a lot to ask from someone.
I enjoyed reading the article on Khumra
I know I'm coming to this a bit late, but I'm wondering what everyone's thoughts are on meal replacements? It seems like a cheaper, faster, and *arguably* healthier way to get nutrients than traditional food. Am I wrong?
I'm in Canada and I'm vegan so I've been thinking of replacing 3/4 of my meals with Soylent Original: https://www.soylent.ca/products/powder-original (and creatine) after I do some blood tests. Other meal replacement products aren't really available here. Anyone else doing something similar?
It seems deeply unlikely to me that replacing 75% of your meals with soylent could plausibly result in a worse result than an ordinary diet. Replacing 100% of your meals, maybe (or 99%). 75%? Maybe if the diet you're replacing is really rigorously healthy.
If your remaining 25% are carefully-chosen, maybe. If you're getting random takeout? Not so much.
Plenty of people have 75% of their meals be things much less healthy than Soylent.
The thing about micronutrients is they're micro.
Humans have survived across a very wide variety of staple diets over millions of years, without major problems. If the Inuit can survive on seal meat for 75% of their diet, people can do fine on Soylent for 75% of theirs.
Yes, maybe Soylent is missing a few things that you need very small amounts of. Maybe it is not wise to eat a diet that's more than 95% Soylent. But 25% of your diet is plenty of pick up plenty of the nutrients that are so unimportant in quantity that we haven't managed to identify them.
Soylent lacks epistemic humility. There have been several times in the past that we have, as a society, confidently believed we knew everything that was nutritionally important. We were wrong about vitamins and micronutrients, and so we probably still are.
Therefore, sourcing all nutrients from pure substances (mostly powders) is most likely lacking in some important but currently unidentified nutrients needed in small quantities. If you want a meal replacement, make it from whole foods which have the right balances of macro- and micro-nutrients. (This is substantially more work. Tough luck.) I have a recipe, but I'm reluctant to share because the person I got it from (also responsible for MealSquares) was planning on marketing it and took down the public copy for that reason. My understanding is that ground sunflower seeds, oat flour, and marmite are a good set to start with for the micronutrients.
What are you eating now?
I've been trying to get my meals to follow the Canada food guide: https://food-guide.canada.ca/en/
I'm also vegan so I take B12, iron, and D3 supplements. I haven't been keeping this up though because it takes up more time than just ordering or making something less nutritious. Trying to make a change. . .
I'm interested in enough detail so that it's possible to make a judgement about whether meal replacements would be a good idea.
Or possibly do meal replacements incrementally. If you're healthy at 25% meal replacements, try 50$.
That should be 50%, of course.
Anyway, here's something that I find to be reasonably priced and very low effort.
Freshe salad toppers. They're canned tuna with some veggies, available in four flavors which aren't terribly different from each other. The spicy one isn't very spicy.
A pack of ten costs $40.
They aren't huge meals-- 260 calories/can and they're better with some fresh salad.
Anyone else have suggestions for the best money/effort combinations for real food?
https://freshemeals.com/?utm_medium=cpc&utm_source=google&gclid=Cj0KCQjwktKFBhCkARIsAJeDT0iXy7q-llnQVL5YrJEs4cZZ99D3_xLaBTx2R43szsGOyjBrpCJCbt0aAnWEEALw_wcB
Thinking meals are only about getting the right intake of each nutrient sounds like a sad life to French me.
I have tried for years to do something roughly like this, mostly for convenience. I just don't want to have to devote very many of my scarce waking hours to meal planning, prep, cooking, and cleaning. Soylent itself didn't work out super well. Whether or not it has theoretically most or all of the nutrients needed to keep a human body alive, it isn't very filling and it isn't very satisfying. One serving is 400 calories, and an average sized male is going to need at least 6 of those to operate at maintenance, assuming you don't workout or have an above-average level of activity, which of course, you should if your intention is to be healthy, in which case you'll need even more.
For what it's worth, meal services have gotten reasonably good. Plenty will deliver pre-cooked, pre-packaged fully contained meals made with entirely whole food ingredients rather than synthetic replacements like Soylent. I've been trying The Good Kitchen for about five months now and it's largely working out. I still don't think you can replace everything this way, but I go for 28 meals a week, which ends up being about 1800 calories a day. I need about 2700 for maintenance, so usually one additional handcrafted, self-cooked meal on top of this is sufficient.
I have no complaints with Good Kitchen itself, but they use UPS as a delivery service and UPS is extremely unreliable. They've missed shipments, frequently damage the box, split into multiple shipments with one arriving days late. They're supposed to deliver each Friday and I still haven't gotten last week's shipment yet.
https://www.thegoodkitchen.com/
Classified want ad (thanks Scott for okaying this)
I work for a tutoring company (NYC based but almost all remote) with rather well-heeled clients. We're looking for someone who can teach, roughly, the sort of topics you'd cover in the second half of a programming degree, but with a more practical bent - web design, making mobile apps, maybe game programming. The target students would be high school kids who did well in AP CS and the like and are looking for help with next steps. The pay should not disappoint. This is contracted work but there's the option for full time and benefits if you do well.
Please have:
* Bachelors, preferably from a name-brand school.
* Experience teaching advanced CS one on one for several years.
* 3+ tutoring clients/families who can recommend you.
Plusses:
* 7+ years of tutoring experience
* Familiarity with elite NYC private schools (think Gossip Girl) and their culture.
* Ability to teach other STEM topics (physics, chem, etc).
Sounds like this might fit you? Email a resume to patrick24601 [at] gmail dot com .
Think Gossip Girl? Does this translate into “experience with humoring entitled brats?”
Who in their right mind would sign up for this?
There is not enough money in the world to take on some jobs.
Because this is likelier to be noticed here than on the Arabian Nights post, here is a DeviantArt post of a map (with copious explanatory footnotes) depicting a world where magic is real and the modern world is gripped by a Cold War between the Idealized Middle East of the Arabian Nights and the Idealized Post-Roman Empire of the Arthurian Mythos: https://www.deviantart.com/quantumbranching/art/Arthurian-Romance-vs-Arabian-Nights-736657777
The entire account, Quantum Branching, is just maps of various althists and fictional scenarios, and seems like the sort of thing that readers of this blog might find *very* appealing.
Yudkowsky is a god king around here but this sort of thing is embarrassing, just pure "I'm the REAL genius" acting out. He does that a lot but is never criticized for it within his community. https://mobile.twitter.com/ESYudkowsky/status/1398785849785868292
It's not necessary that there be a high probability of near term superintelligent AGI for it to be worth spending a lifetime trying to avoid the problem. A prominent EA (forgot his name) put the probability of all X-risks combined at 1/6 this century, a probability high enough to easily justify spending lots of resources to prevent it, considering the near-infinite consequences of failure. Indeed, if every EA devoted all their donations to x-risk (which of course we don't) it would be a severe underspend at the societal level, if, that is, one takes the long-termist view (mainly because there are not many EAs relative to the size of society).
The link given waves its hands a lot regarding AI. Also, its probability estimates are hilariously bad.
I mean, let's go through the terms in his estimate for AI:
1) 50% on AGI that surpasses humans in all activities before the end of the century. This is directly lifted from Ord and I have no particular issue with it (I'm not well-versed enough to say yea or nay).
3) 10% that an AGI will have a reason to usurp humanity. This seems generally absurd; there will be more than one AGI created (likely hundreds if not thousands), and it only takes one to pass this criterion. Note that I'm dealing with #3 first because the desire has to come before the attempt.
2) 10% that an AGI attempting to gain control of society will succeed at usurping us. This despite the definition in #1 that the AGI surpasses humans in "essentially all human activities" (direct quote). Politics is a human activity. This amounts to a rote invocation of some "vital spark" that humans have and AIs can't have.
4) 10% that a rogue AGI that has successfully overthrown humanity will kill us or rule us forever. Scratch the second part of that disjunction, he's saying that >90% of the time a *rogue* AGI that has *defeated humanity* will decide to spare us. Seriously?! I'm not saying there aren't goals or hard limits that would accomplish this, but assuming 90% of rogue AI conquerors have the specific ultimate goals to avoid extinction (and it does need ultimate goals; there is no instrumental advantage to keeping humanity around once the AI is totally independent and unthreatened by us) is obviously insane.
500 years even at the slowest reasonable projection of computing capability growth is far more than necessary to get em-capable hardware. "Innovation is declining" doesn't cut it; 50 years of 20th-century innovation would be plenty, and we're not *that* much slower. (Fortunately, AI-capable hardware is very likely even lower in requirements.)
But even if it takes a millennium, that doesn't really matter. Silicon-substrate expansion is orders of magnitude faster than lighthugger generation ships, so even if we start trying to colonize other stars, at *most* a few systems get biological beings before the tech catches up and silicon overtakes them.
Is that supposed to be a convincing argument we shouldn't take it seriously? We're speculating about future technology here, this is already science fiction by definition.
The self-replication is of the entire ecosystem of factories, mines, refineries, power generators, and construction robots, not of each individual component.
The seed ship contains a fusion reactor, a store of refined materials (including fuel for said reactor), and a number of mobile construction robots. When it reaches its destination (i.e. some place with mineable stuff - icy bodies are good because you can make stuff out of organic materials and there's lots of hydrogen for your fusion reactors), the robots (powered by the fusion reactor - they can plug in to recharge) construct mines to mine the materials, refineries to refine them, factories to construct additional construction robots, and additional fusion reactors. When there are enough materials and construction robots available, new seed ships can be constructed.
Addendum: the mines, refineries and factories are of course powered by the fusion reactors, as are the computers controlling everything.
By hydrogen as fuel I mostly mean hydrogen-from-which-deuterium-can-be-refined, as I don't want to assume protonic fusion.
The clear path is 'literally just emulate an entire brain'. It's probably grossly inefficient, but all it needs is hardware. That's the "em-capable hardware" I mean.
JSK said "50 years of 20th-century innovation", and suggested this would in fact happen within 500 years (i.e. a prediction that we are going to advance at least 10% as fast as we did in the 20th century).
Note that improvement at House's law (processor performance doubling every 18 months for identical power consumption) for 50 years would be 11 billion times modern performance.
Wait, when you say "silicon-substrate expansion", do you mean building things or travel time?
Travel time certainly won't be "orders of magnitude" shorter, as the acceleration limits of biological organisms are not especially relevant in interstellar flight (accelerating at 1g with a 99.99%-reflective laser sail, if I've done my maths correctly, requires a rather non-negligible heat dissipation of 146 kW per kilogram of spacecraft - 1g with a rocket of interstellar-capable exhaust velocity is far, far worse, as AM-beam has theoretical limits of about 0.3c exhaust velocity and 60% efficiency - and in any case accelerating at 1g compared to infinite only actually adds like 2 years onto a trip).
"Science LARPing" is very strong, needlessly provocative language, but... honestly seems pretty justified.
The replication crisis never really ended, and that was just the primary way that traditional science fell short of its *own* standards.
Publish or perish is not conducive to real investigation of important scientific hypotheses, and everybody knows it.
The completely malfunctioning journal system makes distributing knowledge slow and bad, and everybody knows it. Plus it ties the whole edifice to the even-more-badly-malfunctioning university system. (That I won't say everybody knows, not yet. But *you* certainly do, you wrote a book on it!)
So it's needlessly provocative. But not wrong. Par for the course for the Notorious E.S.Y.
Are you sure you're not part of "his community"? You're a fairly regular commenter here for one!
But that particular tweet doesn't seem that embarrassing, or embarrassing at all. Have you read Robin Hanson's posts related to that specific topic? Eliezer's point seems eminently reasonable to me with that context.
How did you manage to interpret a statement agreeing with someone about how people being able to completely ignore an incredibly common criticism of their field they would almost have to know about is indicative of uncritical social conformity as being about how "I'm the REAL genius"?
Similarly, have you ever considered that convincingly arguing that some specific trend exists generally requires actual evidence and not heavily implying that evidence exists ?
Implicitly accusing everyone working in the area of being fake scientists ("science LARPing") defrauding the public is a take of substantial temperature.
Sure, they're wrong, and vincibly wrong, but if the standard is "never be vincibly wrong and never teach error" then we're all going to hell, myself and Eliezer included. A standard of "don't maliciously teach error" is more reasonable, but I'm not seeing malice here.
I don't share your negative reaction to the tweet. If you have an argument for your distaste of it, you'll have to actually make it. (Is it the tone? Is it associating a supposedly-high-status group like academics with a supposedly-low-status activity like LARPing?) And is your problem even with Yudkowsky's tweet, or with Hanson's critique of scientists studying astrobiology?
Having read Hanson's article, his critique sure seems plausible, i.e. any books about the far future which claim that "aliens we meet would be much like us, even though they’d be many millions of years more advanced than us" sure sounds fundamentally flawed to me, likely irredeemably flawed.
And in the first place, what's your prior on far-future predictions by any random academic discipline to be anything but nonsense, given how extraordinarily hard such predictions are to make? Given insufficient incentives to make correct predictions, and a lack of tight feedback loops to form good intuitions etc., I'd expect a discipline's performance to be more like that of pundits than like that of Tetlock's Superforecasters.
Finally, we could take another step back and ask to which extent actual contemporary academia (vs. idealized science) actually deserves its high status, and hence to which extent whether they're (un)deserving of the tweet's ridicule.
I've seen plenty of rightful criticism of him. Vive lèse-majesté.
I'm symbolically banning you for a day for this comment. I'm against people saying "Look how stupid this is!" without any explanation of why they disagree, with a possible exception for things that are really stupid in a hilarious way, which I don't think this is.
I really love that you publicly explain why you're banning people, and the fact that we can still see the offending comments.
I imagine anything particularly offensive that wouldn't be useful as an object lesson just gets deleted
But also yes I'm enjoying the bans and the reasoning as well :)
I already posted this on the subreddit, but I figure it's worth posting here as well.
For a couple months now, I've been trying to integrate Scott's posts about trapped priors (The Precision Of Sensory Evidence and Trapped Priors As A Basic Problem Of Rationality) into a more sophisticated theory about how my own OCD works and how to deal with it. I ran into some issues with this, as the model seemed to be incomplete in some way. Over the past month or so, I've compiled my "findings" into a long-ish blog post on my own Wordpress page (which I rarely use). I know this is a longshot, but I'm hoping that Scott sees it and is able to give some feedback on the model I've come up with. Feedback from others is welcome as well, especially mathematicians for the game theory related stuff in section III.
The essay is here: https://ingx24.wordpress.com/2021/05/27/the-trapped-priors-model-is-incomplete-guys-its-time-for-some-game-theory/
To clarify: This is about much more than just my own struggles with OCD, although that's certainly part of it. This is an examination of the trapped priors framework in general, and has a lot of broader applications for e.g. explaining self-serving bias and raising the question of how to counteract it without overcorrecting.
Hey, thanks for the response. I'm actually not so sure that my explanation of wishful thinking and yours are different - mine might just be putting yours in a more technical way. I'm not totally sure about that though, as I'm almost certain my framework is missing something.
I actually have never been properly part of the rationalist community - I've just been exposed to epistemology and metaphysics more generally for most of my life, both because of it being a strong interest for me and because of the almost unavoidable atheism debates online in the 2000s. To be totally honest, leaning into Bayesianism has improved things considerably - it's pushed me away from "you have to be 100% certain about everything or you can't believe it" and toward "it's okay to lean into what you already believe, even if it isn't certain, and adjust in small increments as new evidence comes in".
What you suggested is certainly something I've thought of before, but at this point I'm already pretty rigid about caring about truth - I was a lot more imaginative and willing to believe anything when I was a little kid, but I don't think there's any putting that genie back in the bottle by now. I honestly envy people who are able to believe in a religion on faith without needing to question it.
It's a nice idea, but I don't think this is what I'm looking for. It is somewhat relevant, since many of the questions I have OCD issues about are sort of adjacent to religion (mostly related to the metaphysics of consciousness), but as I said, I don't accept the premise that religious beliefs are false to begin with - as mentioned, I have some beliefs that are quasi-religious (like non-physicalism about consciousness, leaning toward substance dualism) that, on a purely rational level, I actually *do* believe are true. The problem is that, on a more primal level, I have pathological OCD doubts that are causing me to be far less certain of myself than I know (on a rational level) I should be. Taking your approach would be giving up on the *actual* truth value of these beliefs, and I'm not willing to do that - that would just be giving in to the OCD doubts.
> If one says to himself “the love DSP is untrue, and scheduling it into my life is to endorse falseness. I won’t do it!”, we might admire his devotion. But we’d probably be right in describing him as foolish — he is subject to a biological speed limit of rationality, and to pretend that this limit does not exist is to accept a suboptimal DSP schedule.
I wonder how strictly this holds. Buddhist monks who just sat still until they died come to mind. Part of the canonical explanation is something like a complete suspension of DSPs. And, an even more speculative example, some people with psychopathy seem to be able to consciously micromanage DSPs.
"If one says to himself “the love DSP is untrue, and scheduling it into my life is to endorse falseness. I won’t do it!”..."
This sounds like the beginning of a Shakespearian comedy.
DSP=distorted state of perception.
Your hypothesis that one religious service a week is in the neighborhood of optimal, while having religion pervade one's life is at least hazardous has a counterexample-- orthodox Judaism. It can go bad, but a lot of people find that having a highly ritualized daily life is satisfying, and it clearly isn't incapacitating.
http://www.laffcon.org/
R. A. Lafferty was a very odd and striking sf author mostly writing in the 60s and 70s-- his work was a combination of sf, tall tales, Catholicism, and just plain weirdness. There's a free online event coming up on June 12. For details see the link.
The featured work is _Space Chantey_, a tall tale science fiction retelling of the Odyssey.
A typical short story by Lafferty, to the extent that such a thing exists. The human race is accelerated to the point that people can have two or three full careers in eight hours. When I read it in the late 60s, it just seemed whimsical. Now it's rather prescient. https://www.baen.com/Chapters/9781618249203/9781618249203___2.htm
Another thing which seems prescient is that he wrote a number of reputation dystopias, where people's reputations were very easily destroyed.
Lafferty group on Facebook: https://www.facebook.com/groups/eastoflaughter
I have always enjoyed Lafferty, and this line from the linked story made me laugh:
"“I will scatter a few nuts on the frosting,” said Maxwell, and he pushed the lever for that. This sifted handfuls of words like chthonic and heuristic and prozymeides through the thing so that nobody could doubt it was a work of philosophy."
Quick, what would be today's buzzwords so that "nobody could doubt it was a work of science" (social or otherwise?)
I don't have a handy answer to your question, but you'd probably like Lafferty's _Arrive at Easterwine_-- it's got Catholic theology *and* copious snark at tech startups.
I haven't read too many science papers, but "discursive" and "metastatize" always stick out to me as "nobody uses these" words. Maybe "deconstruct" too.
Question for people who understand the history of GWAS studies and the candidate gene approach: I understand that large scale GWAS studies have shown that many candidate genes are just false positives. As Scott showed in his famous 5-HTTLPR post, entire subfields of candidate gene studies have been built around false positives, only to be conclusively debunked by large GWAS studies.
What's weird to me though is that this seems to be the opposite of what I'd think would happen. All else equal, isn't a candidate gene study more responsible than a GWAS study? With a candidate gene study, you have a priori reasons to suspect a gene, which theoretically should reduce your opportunities to p-hack. In contrast, a GWAS study feels like it could be a fishing expedition across millions of nucleotides.
The reality of course was that it wasn't "all else equal". For some reason, the candidate gene people had a culture of p-hacking, whereas the GWAS people did not. For some reason, the candidate gene people had a culture of low sample sizes, whereas the GWAS study did not. And somehow, the candidate gene people never fully corrected for population stratification, whereas the GWAS people managed to find a way to do this.
My question is why? Is it just that geneticists are rigorous and psychiatrists are not? I feel like there must be a deeper reason that I'd like to understand.
Hello, geneticist here. I think the candidate gene approach is mostly a historic mistake. It made total sense to try to detect the effect of candidate genes with sample of a few hundred people when it was thought that most trait were influenced by a handful of genes. Furthermore, this is the kind of approach which worked very well to detect effects in plant and animal breeding programs (artificial populations produced by selected crosses can have genes with large effects segregating).
It took the on replication of candidate gene studies and many successful GWAS studies for us to understand that, with very few exceptions, most traits in humans are not influenced by a few genes with large effects but by many many genes with extremely small effects.
Thanks for the cogent explanation!
On the last politics allowed open thread there was a running argument about whether the left was uniquely more censorious than the right, cancel culture, etc. Here's some recent links that may provide interesting context
https://www.chronicle.com/article/a-university-suspended-diversity-courses-because-of-an-incident-that-almost-certainly-didnt-happen
https://www.theguardian.com/media/2021/may/21/associated-press-emily-wilder-fired-pro-palestinian-views
There's also a lot of good material on Jonathan Haidt's twitter account, though I'm too lazy to go hunting for anything specific.
This isn't the current right, but it was a revelation to me when I thought about how much the mere existence of homosexuality was written out of public life until the 60s or so.
On China and how the post pandemic economic recovery and ensuing political capital gains may not be sustainable.
Interesting both because of the importance of the long term trajectory of China to, well, everything. And what it might say about the shape of the post pandemic world more broadly if recovery leads to only a short boost in popularity for incumbent governments.
https://www.foreignaffairs.com/articles/china/2021-05-28/chinas-inconvenient-truth
Seems like the author is viewing China through a prism of a thoroughly American millenial point of view. As if the Chinese are just Americans with funny-shaped eyes and weird accents. Given the significant cultural differences history already tells us exist, the prognostication here strikes me as more than normally dubious.
What do you think is specifically wrong about the prediction?
For me to answer that question, *I* would have to have deep insight into Chinese culture, which I do not. However, I know that I don't know, and I know that trying to understand it via projecting the social fads currently sweeping the United States is almost certainly going to go badly wrong. What little experience I do have with other cultures -- meaning I have traveled and lived among them -- tells me that cultural differences are, if anything, stronger and more pervasive than they seem from a distance.
So I know enough to doubt the predictions a priori, without being able to offer anything better.
The fact that the (atrocious) treatment of non-Han minorities has any relevance whatsoever for majority-Han opinion of the government. Generally a significant underestimation of the popularity of the Chinese government in the Han population.
> Given the significant cultural differences history already tells us exist
On the other hand, when Chinese people are allowed to exist in a place free of Beijing's influence (e.g. Taiwan) then they pretty much wind up acting like Americans with funny-shaped eyes and weird accents. (An exaggeration of course, but there's nothing fundamental about Chinese people or culture that makes them any more inscrutably foreign than Japanese or Korean people.)
The convergence between the Taiwanese and Americans could be explained by the former living under the massive cultural influence of the later, rather than just being free of Communist rule.
My limited experience is that Taiwanese are quite different from mainland Chinese. I also rather suspect that mainland Chinese can no more be lumped together than, say, Knoxville County Tennesseans and Santa Clara County Californians.
The author, Elizabeth Economy, is a Senior Fellow at Stanford University’s Hoover Institution and Senior Fellow for China Studies at the Council on Foreign Relations. She has also lived in China for quite a number of years and published several well regarded books on Chinese politics. That doesn't make her infallible of course, but it seems of there's anyone you can give credence too it would be her
Well, as a skeptical empiricist, I pay a lot more attention to the consistency of an argument with what else I directly know than to the credentials of the person making it. You'll note I did not say I knew the author was wrong, I just said my doubt was a priori high. Now, had she begun the article with a solid exegesis on the *differences* between Chinese and American culture, and how those differences didn't matter for the theses she was about to explicate, I would have paid a lot more attention.
Something I'm curious about: it appears that the Sinovac vaccine has much lower efficacy than the Chinese government's claims; something like 55% vs the official claim of 90%.
Now, 55% is not sufficient to give herd immunity. On the other hand, I can't see the Chinese government admitting that its own vaccine sucks and going cap in hand to get two billion doses of the Pfizer.
So this seems to create a situation where everyone in China gets vaccinated with a 55%-effective vaccine which they are told to believe is 90% effective. Once foreigners start entering China again there's going to be a major epidemic among the already-vaccinated Chinese population.
Or am I missing something?
From what I've read, the Chinese vaccines are good enough that prevent you from dying or getting seriously from the Coronavirus. It's not as good as the mRNA vaccines, but they can be made with existing industrial production more readily and thus are better for deploying en mass to the third world.
You might be underestimating the complexity of the problem. It's extremely hard to predict the course of an epidemic, even when you have copious amounts of data, as should be exceedingly obvious by now. And that's without even considering how the genetic evolution of the virus might alter the future in what are even in principle unpredictable ways.
China now has very hard and fast lockdown policies. So even if the immunity rate is low, of detecting cases in an area means you put the whole area into "noone go outside ever" scale lockdown then you can keep outbreaks under control
Interesting reading, however it seems to me the author is leaving out China's historical experience with separatism, namely that it leads to warlordism and warring kingdoms, which makes great material for movies and novels but is pretty terrible to live through. The most recent such phase ended in 1928! Given that Chinese people (in my limited experience) have a lot more historical consciousness than Americans, that's got to be helpful to the Beijing government.
It is also slightly ironic that the Gini coefficient of the PRC, 0.48, which is supposed to sound the death knell of the CCP, is slightly lower than the Gini coefficient of the USA.
Anyone watch the Dominic Cummings testimony and have opinions?
I read a summary, I don't think it's going to have any impact, but I guess that just depends on how low people's expectations of government competence in the face of a pandemic were to start with.
I watched it all. I already suspected the big picture stuff: there was no alternative plan to "flatten the curve" in the early stages. The institutional failure seem pretty big and "trust the experts" would have been a disaster. We seem to be very lucky that the pandemic got up to speed so close to summer.
It's also interesting to hear the anecdotes about how dysfunction the government was. He tells it like an action story, with lots of heroes, villains and plot twists.
Some of the politicians interrogating him tries to score cheap political points but most of then were very professional. No-one really tried to defend Boris.
How much you should trust Cummings is debatable but I tend to like him. Hopefully evidence will get out that confirms his big claims.
Dominic Cummings didn't reveal anything that wasn't pretty obvious already and didn't bring any new evidence to the table. It is not going to change anyone's mind about anything. It could help the Tories kick Boris Johnson out of No. 10, if that's what they want to do anyway.
Whole thing is here for anyone who has 7 hours to spare. So far I got to a 2 hour mark and it´s fascinating, at least for nonBrits. Otherwise no opinion. https://www.youtube.com/watch?v=8LFS3FaRs_s
A correlates with B
B Correlates with C
A Anti Correlates with C
Is there a name for this type of relationship? It pops up all the time and probably is quite meaningful but I can't wrap my head around what exactly it means. None of the statistics textbooks I have can explain this case.
Similarly is there a name for the specificity-sample size tradeoff? For example let's say you are 40 years old and drive a ford f150 and you want to determine how dangerous it is to drive from San diego to Phoenix Arizona. You could look at accident rates for your car, for americans in general per mile driven, for the stretch of road you will be driving down, for people in your age group or 35-45 year olds driving ford f150's down that stretch of road over the past 5 years. Obviously the last category is the most relevant to you personally but the sample size would be tiny. The more you expand your sample size the lower quality the date you get is (the danger of driving a prius is very different from driving a truck, ect).
For the your first question, I don't know the name of it but I can think of a lot of examples. For example, being a Democrat anti-correlates with being a Republican, but both of them correlate with being American.
That relationship describes negative feedback loops.
I mean... I hope I'm not missing something here. Seems like an obvious way to conceive of a negative feedback loop. But when things seem obvious I start to second guess myself. A lot of biochemical reactions look like that. A buildup of A encourages the formation of B which encourages the formation of C, which inhibits A.
I call it a conundrum.
In some cases you might call B a confounder
https://en.wikipedia.org/wiki/Confounding
Also not sure of a name, but that is often just a symptom of condition B being underspecified. For instance, the example here of being a Democrat and being a Republican both being positively correlated with being American means "American" is too broad a category to be causally explanatory. If you narrowed it to being a rural American, a Californian, a South Dakotan, now it no longer correlates positively with both being a Democrat and a Republican, so you've actually captured a geographic trait relevant to the distinction.
There is a more specific way in which that American, Republican, Democrat categories are statistically pathological, though. Republicans and Democrats are both proper subsets of Americans, so there is a sense in which "American" represents the universal set of all elements that can be classified as Democrat or Republican. So be especially wary of that. Clearly all elements of a universal set are correlated with whatever defines that universal set, which makes the definition meaningless. This is obvious for "the" universal set. Nobody tries to say that any trait X correlates with existing, as that is clearly a spurious and meaningless correlation. All possible traits correlate with existing. But similarly, all possible traits that can only apply to Americans will correlate with being American, so that correlation is equally meaningless.
Any recommendations for free or cheap online resources to improve my writing skills ?
https://www.reddit.com/r/writing/comments/7h6qjm/any_good_writing_discord_servers/
Scott-style predictions for 2024:
Republicans take both House and Senate: 70%
A black populist Republican comes to prominence on a platform of opposing abortion and immigration, protecting black civil rights, pro-capitalism and anti-government sentiment: 50%
Herman Cain's twitter account runs for President: 95%
I mean, the Prominent Black Republican Leader is certainly a role that the Republican Party has been wanting to cast for a long time, but the problem is the lack of anyone to fill it.
There's been a parade of failed attempts to fill the role over the years. Ben Carson was smart but had some pretty weird ideas about pyramids, while Herman Cain had a bad history with the ladies. And neither of them had much of a stomach or taste for politics, they both thought they could skip the boring "Mayor" or "Representative" or "Senator" or "Governor" stages and jump straight to President, or at least maybe Vice President.
Then you've got Condi Rice who is well qualified but has no interest in trying to talk to regular people (and who fled public life in shame after the Iraq War), and Colin Powell who was never nearly as Republican as people wanted him to be.
I don't know why Carson didn't catch on, but I feel sure that saying something ill-informed wasn't it.
Tim Scott is next up. If you can bet on him at long odds for president in 2024, now's the time.
Interesting; I don't know much about him but I just started reading his wikipedia page.
sportsbet.com.au currently has him at 81-1
https://www.sportsbet.com.au/betting/politics/us-politics/us-presidential-election-2024-5479667
which is pretty long odds but somehow doesn't seem long enough to bother with. Other candidates at 81-1 include Amy Klobuchar, Jeb Bush and Kim Kardashian. (Rand Paul is 101-1, worse than Kim Kardashian, ouch!)
Interestingly, Joe Biden is at 7-1 and Kamala Harris at 4.5-1. It's tempting to stick a grand on each of them, and a couple of small bets on every plausible Republican, because these odds seem way mispriced at the moment.
I'm pretty sure Joe isn't running again. His whole schtick is that he's a transitional figure. That said, I don't think Kamala automatically gets the nomination, and she seems to consistently underperform her fundamentals in elections, even ones she wins.
I don't know who's next up on my side, but I think if Tim Scott can survive the nomination process he'll be a shoe-in for president and probably deliver a red trifecta. My only consolation will be just how much I'll enjoy watching Manchin eat crow when McConnell abolishes the filibuster and does . . . well, what it is that he wants. Probably something bad with taxes or social security, but I know I don't understand what those people want at all, so who knows.
Of course, as a blue partisan, I always think things are gloomy for my side. Probably the future will be neither as good as I hope nor as bad as I fear. That's generally the way to bet.
Lots of AI researchers have brought up the topic of Academic Fraud Lately:
https://jacobbuckman.com/2021-05-29-please-commit-more-blatant-academic-fraud/
An Ex Stanford prof brought the problem of gaming metrics and possible solutions (through new metrics): https://www.youtube.com/watch?v=jZZ2-eNW77o
This post by Yarvin too I think hits the nail in the coffin: https://www.unqualified-reservations.org/2007/07/my-navrozov-moments/
I suppose the situation is similar to what Scott said, a problem of Moloch: that a benevolent dictator can easily fix the academic science situation, but anyone within academic science will find it almost impossible to make any constructive changes. Based on this my prediction is that: Academic Science will over time keep getting worse, progress will come slowly and with more per capita effort and unless some deep structural reforms are done there will be no way to counter-act this trend. Any thoughts?
The private sector exists for pharmaceuticals and they're well aware of how often medical and biochemical research is wrong, but they use it anyway since you need to start your massive compound screens somewhere. I'm not sure how successful they are, some think the industry is about to fail from inefficiency but most people think it's just fine, and they certainly seem able to produce results when needed.
Even the private sector doesn’t have good incentives. Pharmaceuticals is a good example of that I suppose, lots of effort to discovering adjacent compounds that can be used to extend patents. Some times the private sector has shown excellence was in cases like Bell Labs, where there wasn’t any profit motive directing research. I think this is a problem governments will have to fix
I agree that this is an incentive problem. The research of a lot of chemists has made the industry very good at modifying and rapidly screening compounds, which is essential for synthesising drugs that work, but also really useful for slightly modifying existing ones. The second option is obviously cheaper since it takes much less work, so if "we" (regulators, insurers and consumers of medicine) are willing to pay them to do that, then obviously they're going to just do that.
It's not like Pharma is exclusively tweaking older drugs though, for example research into monoclonal antibodies has given Big Pharma some of their most profitable drugs, although that's mostly because they're so expensive. This does seem like a good thing for patients with cancer and arthritis, and as the costs go down and the technology improves (for example, requiring less frequent injections) Mabs could be used to treat more conditions.
https://en.wikipedia.org/wiki/List_of_largest_selling_pharmaceutical_products
There's also the impressive speed of vaccine development, although the innovation there may be more regulatory than technological, and obviously it wasn't entirely private.
Generally, I think I'd say that the industry and its research is very effective at optimising for what will make them the most money, and that's generally correlated with what will actually help patients. Sometimes they even do things that are less profitable in order to buy goodwill (most vaccine research), although goodwill is perhaps best thought of as a currency of its own that can be earnt and spent. I think the profit motive does work in our favour in the case of big pharma, although it should be recognised that this is only because of significant government subsidies, both directly (paying them to develop treatments for diseases that wouldn't otherwise be profitable) and indirectly (via the funding of the academic research that the for profit research builds on).
As always, I feel the need to acknowledge that I have worked for GSK and so I've definitely swallowed the "Big Pharma is a force for good" Kool-Aid that they serve to all the staff, but it should at least be reassuring to know that the majority of people there are primarily concerned with helping patients - much more than academia where people are primarily concerned with creating interesting graphs to put into Nature papers. It's a great example of how private and public researchers have very different incentives.
Bell Labs was only feasible when they had a monopoly on telephone service and could charge monopoly prices. Bell Labs basically went away when Bell was broken up. It's telling that none of the descendants have founded a similar lab since.
These all look very promising. Would you be willing to summarize something about the youtube piece?
I'm astonished to find that Yarvin can write something accessible and charming.
Well the youtube piece basically talks about the problems with H Index. First he notices that H-Index correlates with top scholarship till around 2005 (when it was invented), Einstein was third in H index till 2000s, then people gamed the rankings and Einstein fell to 600 and above. This is because physics papers have 100+ publishers, everyone cites each other and so on. So to mitigate that he introduces the CAP ranking, which is[ Citations - Co Authors - Publications ] > 0. Example if you have 8 papers in a 5 yr period such that C - A - P > 0 for 5 papers and < 0 for 3 papers then CAP = 5. Ranking people based on this criteria seems much better than h index, though he points out that he would prefer rankings that don't use citations, he's not sure how they would be designed
It's been quite a while since I posted Naval Gazing links here. So here's what I've been up to:
https://www.navalgazing.net/A-Brief-Overview-of-the-Chinese-Fleet - A look at the Navy the Chinese have been building, and why we should take them seriously.
https://www.navalgazing.net/Naval-Airships-Part-4, https://www.navalgazing.net/Naval-Airships-Part-5 and https://www.navalgazing.net/Naval-Airships-Part-6 - A discussion of the USN's lighter-than-air program from WWI to the mid-30s, with an 80% success rate at crashing rigid airships.
https://www.navalgazing.net/Battle-Stations - How ships are manned in combat
https://www.navalgazing.net/LCS-Part-1, https://www.navalgazing.net/LCS-Part-2 and https://www.navalgazing.net/LCS-Part-3 - A breakdown of the Littoral Combat Ship program, the biggest addition to the American fleet in recent years. While it's not technically the worst around, it's pretty bad.
https://www.navalgazing.net/The-Future-of-the-Aircraft-Carrier - What it says on the tin. A semi-sequel to my "Why The Carriers Aren't Doomed" series from a few years ago.
An interesting blog. Thank you for the introduction.
Thank you for doing this work, bean!
I wonder if there's a lesson to be learned from 4chan's experience with /pol/ and the more general problem it's had with every containment board? The problem is something like "whatever we try to contain actually just multiplies in containment, and then breaks containment and spills out".
Ofc in /pol/'s case it was particularly bad since the groups and people involved were deplatformed everywhere else and became a plague on 4chins, but I wonder if there's something more general there.
If a significant portion of your userbase frequents both the containment and non-contaiment boards/subreddits/discords, they will spill over into each other.
Still, is this actually a problem? 4chan's turbocharged racism, sexism and overall -ism works as a normie repellent and keeps it from turning into reddit, and most of the community uses it ironically.
One lesson is that, on any given discussion site, any "off-topic" forum dedicated to current affairs and politics will either take over the entire site, or at the very least, take over the attention of the moderating staff.
Of course, anybody could probably have learned this from watch the evolution of any internet forum over last few decades. Also, I am aware of the irony of making this post in an politics-allowed open thread.
Ancient wisdom: The net considers censorship to be damage and routes around it.
Though true, you quickly run up against the limits of the CAP theorum:
https://en.wikipedia.org/wiki/CAP_theorem
At this point, I think I have an approximate understanding of the CAP theorem, but I'll take another crack at it later to either understand it or figure out what I don't understand.
There's a hint that allowing for time makes the theorem weaker. What happens when there are active agents (people) trying to route around the censorhip?
Might the actual outcome be that censorship gets routed around, but the interface for doing so becomes something that most people aren't willing to use?
As far as I can recall, many containment boards actually worked. Ponies and pokemon, for instance, were successfully constrained. I'm not so sure the current state of things is about a failure of /pol/ etc. as a containment board, or the fact that our current lame discourse is so steeped in theatrical politics that it has become unavoidable. 4chan has always taken pride in contrarianism. As "woke" issues become more prominent in public discourse, 4chan will respond with a proportionately contrarian reaction.
There's definitely a brutal selection effect here, where by definition containment boards that are noticed are almost by definition ones that have failed. My experience has been that the probability of the strategy working in a specific instance is decent, but it's definitely a tool that shapes a community at the cost of sustained effort by moderators.
Ponies and pokemon were internal affairs. Some wanted to talk about them, some got sick of seeing them, both had nowhere else to go, conflict arose, new boards were created, people got segregated and things calmed down.
/b/ and /pol/ were something else, a product of outside hype resulting in millions of outsiders flooding the site in hope of participating in whatever it was currently advertised as. At that point, the boards were sacrificed to newcomers in hope they'll stay there.
Calling both of those "containment boards" is technically correct, but they're two completely different kinds of "containment". /mlp/ is a ghetto. /pol/ is a floodplain.
I want to emphasize that, to a certain extent, floodplains work. Topical boards survived - someone who imagines /a/ or /v/ as "/pol/ with anime/videogames" is mistaken (the times of /b/'s domination were much worse, at the very least). They get to defend their customs and culture, and naive newcomers who don't leave their containment board behavior and topics at the door are warded off. Still, those less-naive newcomers who learn to respect local customs will still largely come from the flood, and still get to retain their beliefs that made them join the flood in the first place, and there's nothing that can be done about that (at least when you're 4chan and, by design, can't effectively ban people).
To sum up, the lesson of 4chan is "don't let others define who you are".
For any ACX readers inhabiting or visiting Texas, the Austin LessWrong and ACX group is actively meeting in-person and will be having a dedicated far-comers meetup Saturday, June 5th.
Far-comer meetups are dedicated time for people who live too far to attend regularly, so come meet people from all over Texas. We've also had more people joining the community lately with people moving to Austin from around the country.
https://www.lesswrong.com/events/TAmntujiYXccKTKKD/texas-far-comers-meetup-in-austin
Is there a moral obligation to have your act together enough so that you can be reliable? Is there a good way to frame this so as not to excessively blame people with executive dysfunction?
Not a direct answer, but I'm reminded of https://meltingasphalt.com/personhood-a-game-for-two-or-more-players/
> So it's often said, "If you want to be treated like a lady, you have to act like a lady." And similarly, "If you want to be treated like a person, you have to act like a person." That's the idea I'm trying to get at.
For a very recent example, one of my friends wanted to make a remote appointment, and the receptionist pointed her at a doctor who doesn't do remote appointments.
A receptionist seemingly has a professional obligation to route calls in a way that pairs caller with receiver such that they can actually work together and meet each other's need. Doesn't require a moral obligation to expect it.
Shouldn't there be a relation between professional and moral obligations?
You can try to say a person who chooses to work in some professional has a moral obligation to uphold the professional obligations of that profession. I'm just not sure you need to. The point of morality seems to be compelling correct behavior in cases in which there is otherwise no compelling factor. Be good just for the sake of it. Professional obligations don't need to be internally compelled by one's conscience to uphold them. The threat of job loss, possibly even losing a license when the profession is a real profession with standards boards and licensing, is more than enough.
Stoicism has a shit-ton to say about this topic in particular. In short, you should have your act together because it is a virtue and having virtues is good for you (as well as those in your community). You should not blame other for their *actions* because you do not and can never know their full context and frame of mind. You should encourage them, and teach them if they are receptive, but only acknowledge and not blame them for things that *you* perceive as shortcomings.
There's only a paradox to the extent that your moral system assumes that all people have equal inherent moral worth.
A utilitarian, for example, shouldn't have difficulty with the idea that some people are inherently morally laudable or condemnable. Likewise, historically virtue ethicists outside the Christian tradition have understood that you can be born with a more or less virtuous character the same way that a horse foal can be born with more or less potential for the particular virtues of horses (speed, stamina etc.).
I don't personally see anything contradictory about saying that being less able to meet one's obligations makes one a worse person. If anything, I find it hopeful as it is easier in my personal experience to medicate this sort of moral defect than it is to fix other comparable bad habits with no clear neurological basis.
Doesn't Aristotle himself call the people who by nature are worse at taking care of themselves and others natural slaves in the Politics?
I think there's an obligation not to be negligent. Like if you run into a kid on the road in your car because you were texting and driving, even if you didn't mean to, it's still blameworthy. When you drive cars a certain level of paying attention is demanded.
Perhaps another way of putting this is that willing an action requires willing a certain amount of mindfulness, since a certain amount of paying attention is required to successfully perform the action.
The difference with actions from malintent here I think is that actions from malintent are necessarily blameworthy -- it doesn't even matter if you succeeded in running over the kid in cold blood, there's no possible way going out of your way to hit them is good -- whereas actions from inattentiveness are only potentially blameworthy -- if you couldn't swerve out of the way because the kid darted out in front of your car in a split second, it's possible you COULD have reacted differently, but only with superhuman attentiveness and skill. Then again, it's also possible that you shouldn't have been texting and driving in a school zone.
I was thinking more in terms of executive function and (vs.?) making and keeping promises. Or the moral issues with large important institutions which just aren't reliable.
I think the theory works fine for promise keeping. You should never break a promise with malintent (why make the promise in the first place), but not all "lapses" are necessarily bad; their badness depends on the context.
Involved in the context I think is the nature/constitution of the agent. If you naturally forget things more easily, perhaps you should be cut a bit more slack before you are blamed. But there will still be lapses of attention -- e.g. the car accident -- for which anyone should be blamed/which we should judge as wrong, regardless of their circumstances.
This I thought was clear from the notion that successful action requires a certain level of reliability/attentiveness, such that one can't possibly act well without a certain level of mindfulness.
Does this address your concern more directly or did I miss again?
And today, in "People I have just learned are Catholic(ish) and kind of wish they weren't", it's the turn of Boris Johnson. Who is, or was, or maybe, or it's probably his girlfriend-now-wife. https://www.theguardian.com/politics/2021/may/30/boris-johnson-carrie-symmonds-married-catcholic-church
"Symonds, who will be taking Johnson’s name, has spoken publicly of her Catholic faith, while Johnson was baptised into Catholicism but renounced it for Anglicanism during his Eton schooldays, according to biographers."
Because of course he did. Something he has in common with Maggie Thatcher, who ditched Methodism because CoE is more respectable and mainstream if you're a Tory.
I was completely unaware of any Catholic connections with Boris until now. I have never seen it mentioned, not even if/when talking about Tony Blair (who did convert). This raises a ton of questions: did he formally defect? (probably not, so technically still one of ours, like Mike Pence and Neil Gorsuch in an American context).
Given that adultery and numerous affairs and illegitimate children and (allegedly) abortion for one previous girlfriend don't seem to have troubled Boris, why now the church marriage? Well, probably Carrie. Which also raises why bother now, given that she had no problem having an affair with a married man, contributing to the cause of his divorce, and having a baby out of wedlock with him? Again, probably down to "the parents would like you to get married in church".
This is what you call cultural Catholicism, I can't even call it cafeteria Catholicism as there seems not to be any "picking and choosing which doctrines I follow", but rather "doctrines? what that?" going on. I'm one of those who would agree that it's unacceptable for the likes of this to happen - Boris and Carrie get a ceremony in Westminster Abbey (because it's quaint, I suppose?) while ordinary Catholics cannot get remarried in church after divorce - and it's not because I support second church marriages for the divorced, but the blatant favouritism, string-pulling, and 'one law for the rich and another for the poor' going on. I'm sure it's very nice they have a family friend willing to be their personal priest who has this kind of pull, but while I agree with Pope Francis that pastoral care of the fallen-away is better carried out with delicacy than hitting them over the head with a brick, there's not much chance here of bringing Boris or Carrie back to the full practice of their faith.
It's not entirely impossible, but it will be a miracle.
Doesn't Boris know that the proper way for the bloke who runs the country to get remarried is to start a new church with himself at the helm, thereby plunging the British Isles into 120 years of sectarian conflict? It probably helps that none of his exes' nephews have ever sacked Rome.
What do you think the going rate of a plenary indulgence is these days?
I've given up on taste.
I have the nagging suspicion that the unhealthiness of the modern diet is not just a side effect of capitalism-driven taste hacking; that instead the enhanced taste *itself* is a causal factor. I'm now sticking to unprocessed, unseasoned foods.
Why unseasoned? Humans have a long history of using herbs and spices.
My unchecked assumption is that spice use was sparse: Only few people used them, not nearly in modern amounts, and only for a few thousand years.
My unchecked assumption is that people who lived in places where spices grow would use that subset.
Fair. Perhaps it's that I'm thinking about salt specifically.
re: previous question; There's also the spooky trend inversely linking calorie intake and longevity. I wanna eat less.
People certainly used salt. People need salt.
Now that you've mentioned it, I have no idea how much salt people used. Availability would have been a constraint, but did people who had access to a lot of salt have used a lot more than they needed?
Salt is one of the only minerals that we can taste and that tastes good (unlike say K, which you can taste but that's so you avoid too much of it). Asceticism is fine and dandy, but it doesn't require you to get dehydrated to do it (which is what will happen if you cut too much salt out of your diet).
I'll probably consume salt in some form, just not distort my appetite by salting everything I eat.
Yeah. Taste/flavor is a factor brought up in The Hungry Brain which Scott reviewed here: https://slatestarcodex.com/2017/04/25/book-review-the-hungry-brain/ It is not the only factor, of course, but it _is_ a factor.
My guess: getting rid of processed stuff while be a larger factor than getting rid of seasoning.
There's definitely habituation. Try a 72-hour water fast. You will be *amazed* at how rich the taste of even "boring" food becomes afterward. Doesn't last, though.
I suppose I'm an outlier in this, but in general I find every additional ingredient tends to reduce my enjoyment of a meal. Rarely does something taste as good as just some chicken and broccolini or peas thrown in a pan. Even comparing sandwiches that I make myself to ones I could buy in a deli I'm always shocked at how much the herbs and sauces they add reduces the experience. When I've been living alone it's been trivially easy to eat healthily, but no one else can accept the food that I make myself.
Perhaps this group could use some funny animal stories. Or they could be viewed as evidence that we're no better at testing the top end of animal intelligence than we are at testing the top end of human intelligence.
https://gallusrostromegalus.tumblr.com/post/618966214055804928/yes-this-is-the-terrible-shenanigans-dog-since
A stunningly intelligent herding dog takes control of a lawn roomba. I strongly recommend the other links in this post-- the dog's shenanigans, the dog teaching another dog, the war with the fox.....
That is wonderful, thank you. And yes, sheepdogs are the smartest dogs. But I'm a little bit surprised that one Sheep Simulator(TM) was enough to activate the herding instincts; I thought that usually took three.
Do people count as sheep? A friend told me about a border collie which was only content when the whole family was seated at the dining table so that they could all be kept track of.
Yes. I've been at a house where the dog would try to herd people when there were several folks visiting.
My family had a german shepherd, growing up, and she definitely did this. Anytime we were all walking together, she would circle the group and (nicely) head off any strays.
Dog instincts are baffling to me. My in-laws have a beagle they raised from a pup, and that dog is spoiled pretty rotten. Basically a lap dog, spends most of her time indoors on the couch. Never been within a mile of a hunt a day in her life. Treated squirrels and things the same as any dog, some barking, a little chasing. Nothing major.
But Beagles were bred to hunt rabbits. And when she was about 4 years old I was taking her on a walk and we came across a rabbit. I have good reason to believe it was the first rabbit she had ever seen.
And she. Went. Nuts.
She was after that rabbit like a rocket powered summer sausage. The rabbit got away, naturally, but she was obsesses with sniffing out it's hiding place. It took a good ten minutes before I could, with difficulty, pry her away from the site. She'd never reacted that way to an animal before.
How the heck does a Beagle know that it's meant to hunt rabbits? How does it know what a rabbit is enough to distinguish it from a squirrel or cat? Did they just breed together all the most rabbit obsessed dogs they could find? Where is the rabbit recognition coded into their DNA, and how exactly does that work?
Dogs, man.
Who are the really fascinating people you follow, who are not related to each other? To keep an open-mind and a broad perspective on things?
My list:
Scott
Derek Sivers
Taleb
But I'm looking for more and more different people
Scott Locklin, Niccolo Soldado, Curtis Yarvin
Fantastic, Locklin is my fav so far
Arnold Kling, the Smarter by Science people, and the Football Outsiders people.
Great, thank you
Do they need to address current events directly? Otherwise (as Scott's Arabian Nights review shows) I think the best bang for your increasing-open-mindedness buck is to read old books. E.g. Homer/Pascal/Confucius >>> for range and depth than practically anyone alive today.
Fascinating, yeah Arabian Nights was really something. That's a good idea, I think even John Stuart Mill etc should be good
Anything from this list http://www.sonic.net/~rteeter/grtfad4.html + the book "How to Read a Book" by Mortimer Adler is a great start.
Great, thanks
Can anyone recommend a good provider for pharmacogenetic testing, specifically applying to psychiatric medications (SSRIs, anxiolytics, etc.)? A family member is having issues with her medication and we're hoping such testing might identify better candidate therapies.
I used https://www.genelex.com/ a couple of years ago; the price was very reasonable and the report seemed *very* complete.
I ordered the test because no medical provider I've ever encountered believed me when I told them I have absolutely zero pain relief from opiates. Their disbelief made recovering from surgeries absolutely excruciating. I strongly suspected I had the CYP2D6 gene mutation that prevents opiates from being usefully metabolized, and it turned out I was right.
Having the laboratory paperwork made all the difference in the world with how doctors reacted to me. I went from being treated like a drug-seeker and/or ignorant hippy to a fully legitimate partner in the healthcare process. Suddenly it was important to have the testing results on my chart, and my primary care doc even had the in-house pharmacist do some research so we could be prepared with pain management alternatives in the event of a dire emergency or upcoming surgery.
As a bonus, the test also flagged a mutation that puts me at high-ish risk of having a dangerous adverse reaction to estrogen-based medications. I had no idea, but now I'm obviously going to avoid estrogen-based birth control and estrogen replacement therapies.
Your family member's mileage may vary due to the condition you're looking at, but I can say I had a very positive experience with Genelex!
Thank you very much!
Could tracking suicidal ideation/suicidality be helpful? Or might it risk drawing undue attention to suicidal thoughts - and making the problem worse? Sometimes, I feel like it could help to keep an eye on this metric for the purposes of being sure to intervene before the problem gets too bad, and to see what positively/negatively impacts it.
Request for thoughts on judging covid risk, specifically:
- Any opinions on microcovid.org
- Thoughts on converting microCOVIDs to micromorts
I like the idea behind microcovid.org a lot; no opinion on how sensible the modelling is but I haven't noticed anything obviously crazy. But I feel like for youngish people in countries with a large chunk of the population vaccinated, the risk budgets are pretty conservative.
Suppose you are a 25 year old with an IFR of 0.01%. At this point, when your chances of infecting a vulnerable person are pretty low, it seems fair to assume that at least 10% of the badness of covid comes from the risk of death. So say 1 micromort = 1000 microcovids. Then the "standard risk budget" of 1% risk of covid per year works out to 10 micromorts. That seems pretty low indeed to me, compared to e.g. the risk an average American takes by driving, or the Value of a Statistical Life used by governments (https://en.wikipedia.org/wiki/Micromort). But maybe I'm thinking about it wrong, suggestions welcome.
Recommendation request for the tabletop nerds out there. Our group's last campaign ended with them ascending to join the pantheon of gods and we're looking to do a followup campaign in a system that supports playing gods. Not in the "super awesome level 99 badass" sense but an emphasis on guiding mortals, remote action, nationbuilding/politicking, and everything one imagines an interventionist-but-not-physically-manifested deity would get up to.
This is a niche thing so I'm not expecting to find a perfect system, but I feel like there has to be some medieval royalty-simulator game or something that has a mechanical focus on the kinds of things we're looking to do.
I've never played it, but maybe Amber Diceless RPG? As I understand it there's a ton of implied setting, since it's an obscure RPG based on an even more obscure series of fantasy novels, but the concept of being gods responsible for abstract concepts / platonic ideals dealing with complex interdimensional politics sort of fits.
There's also Exalted, which is theoretically what you want although as I understand it a typical game is more like Dragon Ball Z. Kind of like how World of Darkness is more Blade than gothic horror in practice.
I've played Amber, and I don't think it fits the bill. The protagonists are very powerful, but they're still physical people interacting directly with the world.
Are the Amber books that obscure? It seems to me that a lot of people still remember them. On the other hand, a lot of people were born after the series ended, and there's no movie or television show, so maybe they're sort of obscure.
For the record, I think the first book was very strong, and the series started going downhill with the second book, and went downhill fast after that. The common opinion seems to be that the first five are good, but the second five aren't so good. There are people who like the whole series.
I've heard that someone asked Zelazny why he was writing those potboilers, and he pointed at his children who were playing outside. He said he needed to have [some small number] of books in print for each person he was supporting.
Amber's not obscure! They're the most accessible intro-level Zelazny. They're certainly not (IMO) his best, but... well, I think there was a major quality break between the first and second series, and even the first was descending into chaos pretty badly by the end - Zelazny was really better at plotting shorter pieces - and you're probably right about the first being the best, but potboilers still seems rather harsh.
Then again, I have a soft spot for Zelazny in general. Let's see... I'd say he's less well known than Bujold, better known than Cherryh? I'd expect anyone really into science fiction to know (of) him, but not anyone who just knows a few greats. Your mileage may vary on whether that's obscure.
Personally, I recommend A Night in the Lonesome October, Lord of Light, or any of the short story collections.
You might want to look up Nobilis. There isn't an emphasis on politics IIRC, but you're playing the embodiment of a concept so you have godlike powers.
I would see if you can find an answer through Tumblr user Prokopetz. He's a game designer who frequently answers such questions. You could send an ask or just dig through his old answers until you find someone who asked a similar question.
There is an interesting discussion among mathematicians about improving peer review, common and hidden knowledge, and anonymity: https://mathoverflow.net/questions/394101/peer-review-2-0
I’ve got this perverse desire to say something just transgressive enough to earn a one day symbolic ban from ACX. It’s kind of like the time I got a behind the scenes tour of the big cats area of the zoo. If I had really wanted to I could have reached through the bars and touched a tiger’s flank. Am I the only one who gets this sort of urge?
Okay let’s give this a whirl.
Hey Scott, Thomas Bayes wore combat boots!
My transgressive comment is that Data Secret Lox is full of Trumpists and, even worse, the Leftists who suck their cocks and tone police the other leftists who might have something different to say. They banned me and that's fine. I asked them for a lifetime ban and they wouldn't give it to me, but fuck them, they suck Trump's cock.
Well, obviously I was just going for a laugh. But now that you mention it there is an unexplainable - to me, at least - reluctance to acknowledge the obvious fact that Trump is a moral cretin in the comments here.
I believe that Trump University (a scam aimed at very vulnerable people) was enough to disqualify Trump on moral grounds, but people don't seem to agree with me.
I also think that him hanging around the Miss Universe dressing rooms when he knew the young women didn't like it is a more serious matter than that "grab them by the pussy" remark, but apparently I'm extremely weird.
I believe his refusal to acknowledge he lost the election is his most egregious action, but I agree with your points as well.
Not only that, but an insistence that he could only have lost the election if the other side cheated-- made before the election, so there was no evidence.
I’m still stunned that so many Americans think cruelty and ignorance are acceptable qualities in a president.
An orthodox Trumpist from the old site, who is over on DSL now, says it this way, quoting Lincoln on Grant. "I cannot spare this man. He fights."
It's a pretty common view on the right that the left is out to destroy them. If you believe that, you're going to be willing to make accommodations with people you probably otherwise wouldn't.
I’m new in town. What is DSL?
It's obvious and so uninteresting. Scott had a post a long time ago pointing out that controversies that get a lot of attention are always ones where it is not obvious which side is right, so there are partisans arguing both sides.
The nub of the controversy is that each side thinks it is obvious that their side is right. I don’t think it’s uninteresting that there is not a shared reality though. That is pretty frightening.
I don't think that's David's point. I think the point is that even most of his fans don't defend Trump's character. The idea that he's a moral cretin isn't controversial because it's widely shared, even by many (most?) of his voters and fans.
I think I might get his now. I’m relatively new here and this has already been done to death.
If I’m completely honest though I had never really considered the possibility that many Trump fans are aware his behavior is terrible and are still okay with it. I guess I’m terribly naive but that idea is really a gut punch for me. More naive thinking: But that’s bad!
ACX Miami Meetup this Thursday (6/3) @ 2:30pm
Location: Pasion del Cielo coffee in Brickell City Center
701 S Miami Ave Unit B350, Miami, FL 33130
Facebook event: https://www.facebook.com/events/1390917134609778
I'm starting to get the impression that if party leaders of the Democrats or Republicans came out and said there was a giant hurricane coming, get to shelter now, a large amount of the non-leadership members of the other party would not believe them, and if they even got up to look out the window it would be solely motivated by trying to prove the other wrong.
This is not entirely because each simply distrusts the other for being in another party, but also as a result of each one frequently exaggerating a problem, or otherwise caught lying about a particular thing, and the habit of each echo chamber's favorite past time being to emphasize any instance of this from the other. That's both a problem of hyper focusing of an echo chamber on tearing down the other, and a problem of too frequently using and even defending absurd hyperbole's within each group. Poe's law comes into play, but maybe when people start calling people out years later for saying the world won't end if [x] gets elected (real example) when the world, in fact, did not end goes a bit beyond Poe's Law.
The glimmer of hope here is that at least the plurality of the U.S. is not aligned to either major party, as best as polls can tell, and I don't usually get the impression that other distantly separated parties (Green vs Libertarian, for instance) have this extreme of a problem with each other.
Um...
As a Libertarian, if a Green Party member tells me a catastrophe is likely to occur, and I know nothing about said person other than party membership, my priors on the catastrophe do not change at all.
I think you may be a little optimistic with your final conclusion!
(Mind, if I do know more about the person, especially if they have domain-relevant expertise, that doesn't hold. And note that at the very beginning of the pandemic, in California at least (where I was looking, at least), we didn't get the nasty factionalism - that came in later - despite our government being pretty much straight Democrat. So I don't think it's actually quite as bad as you're describing, but I also don't think the minor parties are an exception.)
I may be missing something here, but your description of your own affiliation and presumed reaction seems to support my point, rather than contradict it. The way you wrote it though sounds like you meant it to contradict?
I think the span of time specifically between the beginning of the pandemic and now has seen significant heightening of this sort of divide and distrust, and the various reactions to it I think have contributed a lot to each sides disregard of the other. That said, I wasn't in California, but for whatever reason a lot of my more republican contacts were specifically in California, and they seemed to jump on distrust of anything about the Pandemic harder than most others. That much could have just been my own exposure bias, and there were a few there that at least didn't outright deny everything as simply a lie very quickly. The severity of the distrust on that topic did seem to correlate more with just how supportive they were of Trump specifically (so the republicans that didn't like Trump much anyway seemed less likely to call it a hoax, and only seemed to care about certain government actions in the matter). I think even they got more distrustful since that time though of any Democrats. That much could just be my own exposure bias though, I did not conduct substantial polls throughout that time.
Anyone have insight on (1) the extent to which inhaled glucocorticoids (specifically Symbicort) impact the effectiveness of the mRNA COVID vaccines (specifically Pfizer)?
From 20 min of Googling all I could find is the following:
The AAAAI emphatically says, "No, there is no impact on an individual’s ability to respond to the vaccine and control of asthma is essential! There is no data to suggest that inhaled corticosteroids and/or leukotriene receptor antagonists impact on immunogenicity of the mRNA COVID-19 vaccines."
(They seem to avoid the error of assuming that no evidence = no impact: they use different language when discussing another question on which "no information could be found and more information is needed" suggesting that they are not simply saying, "oh there are no studies on this so it's not a problem." They also seem to be speaking pretty specifically about Symbicort in that answer, since later they say "Daily oral steroids may interfere with the antibody response to the vaccine based on data with other immunosuppressives and flu vaccine.") (https://education.aaaai.org/resources-for-a-i-clinicians/vaccines-qa_COVID-19)
However, drugs.com says there are moderate interactions between Symbicort and the Pfizer COVID vaccine:
"If you are currently being treated or have recently been treated with budesonide, you should let your doctor know before receiving SARS-CoV-2 (COVID-19) mRNA BNT-162b2 vaccine. Depending on the dose and length of time you have been on budesonide, you may have a reduced response to the vaccine. In some situations, your doctor may want to delay vaccination to give your body time to recover from the effects of budesonide therapy. If you have recently been vaccinated with SARS-CoV-2 (COVID-19) mRNA BNT-162b2 vaccine, your doctor may choose to postpone treatment with budesonide for a couple of weeks or more"
... For SARS-CoV-2 (COVID-19) vaccines, vaccination should preferably be completed at least two weeks before initiation of immunosuppressive therapies; however, decisions to delay immunosuppressive therapy to complete COVID-19 vaccination should consider the individual's risks related to their underlying condition. Vaccines may generally be administered to patients receiving corticosteroids as replacement therapy (e.g., for Addison's disease)." (https://www.drugs.com/interactions-check.php?drug_list=432-2530,4221-19642)
Not sure if I'm violating any copyright issue here but...
From the linked article:
"In 1947, having left Nazi-occupied Vienna for the quaint idyll of Princeton, N.J., seven years before, the mathematician Kurt Gödel was studying for his citizenship exam and became preoccupied with the mechanisms of American government. A worried friend recalled Gödel talking about “some inner contradictions” in the Constitution that would make it legally possible “for somebody to become a dictator and set up a fascist regime.” Gödel started to bring this up at his actual examination, telling the judge that the United States could become a dictatorship — “I can prove it!” — before his friends (one of whom was Albert Einstein) managed to shut him up so that the naturalization process could go on as planned."
https://www.nytimes.com/2021/06/02/books/review-journey-edge-of-reason-kurt-godel-biography-stephen-budiansky.html
Review of “Journey to the End of Reason” Kurt Godel bio
So, I'm considering getting an electrotrichogenesis (ETG) treatment for my incipient boldness (I'm vain. I know). The people at the ETG center claim it is super effective, and judging from the before-after pictures they showed be, it seems to be the case (assuming that the pictures are true). They also cited some very impressive statistics like: 96.7% exhibit extra-hair growth, and the average hair count increased by 66.7%, as compared to 25.6% in the control group. So far, so good.
When I went to check the literature for myself, I found that the results they (and every other ETG clinic) cite come from this one paper on the International Journal of Dermatology (see link at the end), which is from 1990 and has treatment group of 30 people. The reported effect sizes are so large that you should be able to capture them in such a small sample, but still the fact that the sample is so small and that there has been no follow up research makes me a bit weary of the effectiveness of the treatment. (There is a follow up paper in 1992 that uses the same subjects as the 1990 paper and extends the treatment from 30+ weeks to 70, showing further gains in hair density, girth, etc).
According to Wikipedia, it has been approved for use in Europe, Canada and Australia, however I am not sure if this means that it's proved to be effective be the corresponding authorities, or it just doesn't kill you.
Anyway. I'm curious to hear your thoughts and experiences (if you have any) on this.
Here's the link to the 1990 paper.
https://pubmed.ncbi.nlm.nih.gov/2397975/
Wow! Wouldn't it be better to become bold?
Okay that was a cheap joke built an uneditable typo. Don't know anything about the procedure but Godspeed Formicad Cigarros!