Isn't it a problem insofar as everyone who invests with A16Z has the option of putting their money in the S&P instead, and unless they expect A16Z to beat the S&P, they'll just do that? (unless they're looking for something countercyclical which, again, I don't think tech is)
You can trivially beat the S&P in an up year by buying the S&P with 2x leverage (or 3x, or whatever). Or just buy the magnificent 7 and ignore the other 493 companies. But then when there's a tech downturn, you get absolutely slaughtered.
What sophisticated investors want is good returns year in and year out, which means buying the S&P and also buying other good investments that are uncorrelated with the S&P.
I don't know if A16Z is actually uncorrelated with the S&P because the S&P is so tech-heavy and there are probably correlations between big and small tech companies. But "did it beat the S&P" is not the main benchmark that most A16Z investors are using.
It's a little weird to me to point to VC as an example of "good returns year in and year out"; at least, its reputation is that it invests in companies that either strike it big or go belly up. If you want "good returns year in and year out" (which I don't necessarily care about), invest in bonds or something.
> at least, its reputation is that it invests in companies that either strike it big or go belly up. If you want "good returns year in and year out" (which I don't necessarily care about), invest in bonds or something.
Yes, but the theory is that these bumpy returns are uncorrelated enough with the other asset classes you own, so that your overall returns get smoothed out (and are higher) over time.
"The way you get consistently good returns is to invest in lots of uncorrelated things that all individually do well on average."
To me startups sound, if anything, like the opposite of this: not uncorrelated with each other, and not necessarily doing well individually on average.
"Also bonds have capped upside which is not great if you're ok taking on more risk to get higher returns."
The whole idea of trying to get consistently good returns is presumably to accept lower overall returns in exchange for lower variance. If you're not going to do something because its overall returns are lower, fine, but then you're back in the territory of more risk and bad returns in some years.
> To me startups sound, if anything, like the opposite of this.
US startups as an asset class are probably uncorrelated with other asset classes like large cap stocks (i.e., S&P 500) or REITs or private credit. So if you're a rich person or university endowment holding REITs and private credit and large cap stocks, it could be very reasonable to add A16Z as a way to diversify.
> accept lower overall returns in exchange for lower variance
The goal of diversification is to lower variance *without* compromising risk-adjusted returns. If you expect three uncorrelated assets to each return 8% over the long term, you get a much lower variance by investing in all of them equally than by going all in on one of them.
There's lots of fancy math to it, but basically if you know the expected return and variance of a bunch of assets, and your own risk tolerance, you can plug that into a formula to tell you how much to invest in each asset: https://en.wikipedia.org/wiki/Modern_portfolio_theory
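A tiny sketch of that variance-reduction math (all numbers made up for illustration):

```python
import math

# Toy illustration: three uncorrelated assets, each with an 8%
# expected return and 20% volatility.
mu, sigma, n = 0.08, 0.20, 3

# Going all-in on one asset: expected return 8%, volatility 20%.
all_in_vol = sigma

# Splitting equally across n uncorrelated assets: the expected return
# is still 8%, but variances add as weight^2 * sigma^2, so volatility
# falls by a factor of sqrt(n).
equal_weight_vol = math.sqrt(n * (1 / n) ** 2 * sigma ** 2)

print(all_in_vol, equal_weight_vol)  # 0.2 vs roughly 0.115
```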
1) The S&P 500 had a very strong performance over this period, much stronger than the vast majority of ex-ante expectations. Index returns are generally hard to predict. This means that the update from A16Z's underperformance over a period is not very large.
2) Almost everyone who invested in A16Z already has a much bigger position in the S&P 500. The decision they had to make was whether a marginal dollar of their investment should be allocated to A16Z. This may have been optimal even if A16Z was ex-ante expected to underperform the S&P 500. For example, if an investor thinks that A16Z is less than 100% correlated with the S&P 500 and is expected to lag it, an optimal portfolio might still include an allocation to A16Z for diversification reasons. This portfolio would be expected to have a lower return than the S&P 500 but a higher expected return per unit of expected risk.
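A toy mean-variance calculation illustrates this; every parameter below is an assumption for illustration, not a real statistic about either investment:

```python
# Toy sketch: asset B is expected to LAG asset A, yet a positive
# allocation to B still raises return per unit of risk, because the
# two are only partially correlated.
mu_a, vol_a = 0.08, 0.16   # "index-like" asset
mu_b, vol_b = 0.07, 0.30   # "VC-like" asset with lower expected return
corr = 0.3                 # assumed imperfect correlation

def return_per_risk(w):
    """Expected return per unit volatility with weight w in asset B."""
    mu = (1 - w) * mu_a + w * mu_b
    var = ((1 - w) * vol_a) ** 2 + (w * vol_b) ** 2 \
        + 2 * w * (1 - w) * corr * vol_a * vol_b
    return mu / var ** 0.5

best_w = max((i / 100 for i in range(101)), key=return_per_risk)
# The optimum puts a nonzero weight on the lagging asset.
print(best_w, return_per_risk(best_w), return_per_risk(0.0))
```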
About the revised Monty Hall problem. Prior to choosing or any doors being opened, you know there will be a 2/3 chance Monty has the prize you really want (the special goat). He opens a door and shows you a prize you don’t want (the non-special goat). He still has a 2/3 chance of having the prize you really want. Switch. Maybe I am not understanding what this variant is supposed to add to the original problem.
Even with the original one, if he intentionally reveals a door you don't want and didn't select, you get some information about the door you didn't select, but none about the door you selected. If instead he just randomly opens a door and it happens to be one you don't want and didn't select, then you've also learned something about the door you selected.
This one changes things yet again - he's trying to avoid one particular door, but it turns out that's not the door you care about, so you learn something different about each of the doors than in either of the other variants.
> He still has a 2/3 chance of having the prize you really want.
I think this is where intuition fails. Before he opens the door, you're right, there's a 2/3 chance you don't have the special goat.
But after he opens the door, you've learned something - that it's not possible you have the normal goat! And that 2/3 chance of you not having the special goat was predicated on there being a 1/3 chance that you **could** have the normal goat! So it's no longer true.
And more, since there's a 100% chance he opens the normal goat door if you have a special goat, and a 50% chance he opens the normal goat door if you have the car, you've learned even more! You've learned it's more likely that you have the special goat, since it's less likely he would have opened the normal goat door if you have the car, and more likely if you have the special goat.
Here's my solution to the revised Monty Hall problem, from tabulating all the options. Just to explain the notation: assume, without loss of generality, that you always select door 1 and the host then opens either door 2 or 3.
Let's say the true situation is:
c g g
That is, door 1 has a car and doors 2 & 3 have goats. After the host reveals a door, the situation becomes:
c g g -> c g
That is, the result of staying is car and the result of switching is goat. Let's warm up with the original Monty Hall problem. In this case, the 6 possibilities are:
c g g -> c g
c g g -> c g
g c g -> g c
g g c -> g c
g c g -> g c
g g c -> g c
Perusing the right hand side, if you stay 2 out of the 6 options give you a car, but if you switch 4 out of the 6 options give you a car.
Now, switching to the revised Monty Hall problem, where g* is the desired goat, the 6 options are:
c g g* -> c g*
c g* g -> c g*
g* c g -> g* c
g* g c -> g* c
g c g* -> NA
g g* c -> NA
But notice that options 5 and 6 are not possible by the problem setup. The host will not reveal the car (because they are playing original monty hall), but you know that they also did not reveal g*. Out of the remaining 4 options, 2 of them have the desired goat behind the original door and 2 of them have it behind the alternate door. It's 50-50.
Your mistake is in not considering the possibilities where the host reveals g*. The problem states "_host is unaware of this_". In the c g g* case, both c g* and c g are equally likely, so each of these two worlds should get half the probability of the g* c g -> g* c world (where the host has no choice).
Does that mean that if I take the normal Monty Hall problem and simply paint the goats different colors so you know which one Monty opened -- then I would also be indifferent between switching and not switching?
Yes! Because the described situation is different: goat X was shown, rather than just "one of the goats". This removes the possibility of you having selected goat X, so 1/3 of the possible cases are removed.
EDIT: actually, more than 1/3 of the cases are removed:
- you selecting bad goat (1/3)
- Monty selecting good goat after you selected the prize (1/6)
It feels wrong and unintuitive that the optimal strategy changes because of an arbitrary difference that doesn't seem to affect anything physically. Can you describe it in a way that makes it intuitive?
The change is that you and the host no longer care about the same thing. The host is still making decisions based on the car, but now you're making decisions based on the good goat.
It's like playing chess, but secretly you win if you take the opposing queen. Your opponent plays the same, because they don't know anything is different. But your strategy should change because your victory condition changed. (Maybe you aim for an early queen exchange.)
Back to the Monty Hall problem, let's say there's 1,000,002 doors, and 1 car, 1 good goat, and 1 million bad goats. You pick a door at random. The host opens 1 million other doors, all of which have bad goats behind them. The only doors left are the one you picked, and this one door way over there that you didn't even notice. That other door almost certainly has the car, which means that (by an amazing coincidence) the door you picked almost certainly has the good goat. So if you want the car, you switch doors, but if you want the good goat, you keep the same door.
(It'd be much more likely for one of the million doors that the host opened to have the good goat, but sometimes we get lucky.)
No, you did it wrong. In your scenario, the host is FORCED to open a million doors (and only show bad goats). This can ONLY happen if you first luckily picked the good goat or the car. The entire scenario is very, very unlikely; if you randomly pick your door, almost certainly you would have picked a bad goat and then the host would have revealed the good goat elsewhere. But that got ruled out by the scenario! All those likely worlds didn't happen. You are forced into a world where your initial lucky pick MUST have been either the good goat or the car. And it's 50/50 which it was ... and so it would also be 50/50 if you switched. Whether you're interested in the good goat OR the car. Switching doesn't matter.
This is very different from the original Monty Hall problem. In the original problem, the goats are identical, so the host can ALWAYS find an extra goat to reveal, no matter what you pick first. That doesn't happen in this new scenario. If you "accidentally" pick the (or "a") bad goat ... the host CANNOT complete the new scenario. So you have to begin the analysis by RULING OUT any possible world where your first choice involves a bad goat. Those worlds didn't happen.
No, that person's math is wrong. If you picked goat A, the host will always reveal goat B. But if you picked the car, there's only a 50% chance the host reveals goat B. So the chance you picked goat A is 2 times the chance you picked the car - thus a 2/3 chance switching will get you the car and a 1/3 chance it doesn't.
Your solution contains an error - the 4 cases don't actually have equal probability.
Consider the original Monty Hall problem, but with numbered goats. It actually has 8 cases, not 6: when the player chooses the car, Monty has a 1/2 chance of revealing either goat. So the actual cases are:
c g1 g2 -> c g1 (1/12 probability)
c g1 g2 -> c g2 (1/12 probability)
c g2 g1 -> c g2 (1/12 probability)
c g2 g1 -> c g1 (1/12 probability)
g1 c g2 -> g1 c (1/6 probability)
g1 g2 c -> g1 c (1/6 probability)
g2 c g1 -> g2 c (1/6 probability)
g2 g1 c -> g2 c (1/6 probability)
When you add up the probabilities, you still end up with a 2/3 chance of a goat when not switching and 1/3 when switching.
In the revised problem, the initial probabilities are the same, but the reveal excludes every possibility where the good goat would have been revealed. So the cases are as follows:
c g* g -> c g* (1/12 initial probability, 1/6 normalized)
c g* g -> c g (excluded)
c g g* -> c g (excluded)
c g g* -> c g* (1/12 initial probability, 1/6 normalized)
g* c g -> g* c (1/6 initial probability, 1/3 normalized)
g* g c -> g* c (1/6 initial probability, 1/3 normalized)
g c g* -> g c (excluded)
g g* c -> g c (excluded)
So in the end, you still get a 2/3 chance of getting the (good) goat when not switching and 1/3 when switching, just like in the original.
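You can verify the weighted table with exact arithmetic; a small sketch, assuming the player always picks door 1 (index 0 below) and the host opens a uniformly random goat door among the other two:

```python
from fractions import Fraction
from itertools import permutations

# Exact check of the weighted cases above. The player always picks
# door 0; the host opens one of the other two doors uniformly at
# random among those hiding goats (he can't tell g* from g).
stay_gets_good = Fraction(0)
total = Fraction(0)
for doors in permutations(["c", "g*", "g"]):
    arrangement_p = Fraction(1, 6)            # each layout equally likely
    goat_doors = [i for i in (1, 2) if doors[i] != "c"]
    for opened in goat_doors:
        p = arrangement_p / len(goat_doors)   # host's 50/50 split if needed
        if doors[opened] == "g":              # condition: plain goat revealed
            total += p
            if doors[0] == "g*":
                stay_gets_good += p

print(stay_gets_good / total)  # 2/3: staying keeps the good goat
```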
Assuming you're convinced that in the original Monty Hall problem you can increase your chance of winning (getting the car/million bucks/whatever), you don't actually need to do any probability calculations; they're already done, you just want the inverse.
It is obvious that with the new alternate win condition (getting the billionaire's goat) you maximise your chance of success by doing the opposite of what would increase your probability of getting the car.
The way I rationalized it is to say that because the host doesn't distinguish between the goats, then without loss of generality he picks the first goat door available.
c g g* -> c g* switch
c g* g -> NA (host would have shown g* - this is the tricky one)
My understanding is this problem condition says that:
1) Monty opens another door
2) Behind it is an ordinary goat
There is only one ordinary goat in this variant of the problem, so we can conclude from the condition that we have not chosen the door with the ordinary goat in any of the worlds in which the condition places us.
Hence, behind our door there is a car and a valuable goat with equal probability. Hence, it's 50/50.
Edit: no, it's not. My thinking was: you can't have chosen the bad goat, since we assume Monty reveals it. Therefore if you chose the car, you need to switch, and if you chose the good goat, you need to stay. 50/50.
It's wrong; the following possibilities don't have the same probability:
1. You choose the car, Monty reveals the bad goat
2. You choose the good goat, Monty reveals the bad goat
If you choose the car, Monty reveals either the bad goat or the good goat, one of which is excluded by the assumption. Therefore #1 has half the probability of #2.
It seems to me that this reasoning is wrong. We should consider probability as a measure of our ignorance. That is, if a die fell and we didn't see it, the probability that it rolled a 3 is 1/6. If the die fell and we saw it, it is either 0 or 1.
In this case, our ignorance is reduced by the condition itself. We know a priori that Monty will not open the door with the car, the door with the prized goat, or the door we chose. This completely rules out situations in which we chose a door with an ordinary goat. In effect (due to some magical intervention by the author of the condition) we are choosing between two doors: the one with the car and the one with the valuable goat. And the probability of choosing either of them is the same.
The point where we find out that Monty won't reveal the good goat is causally downstream of the randomization of the doors, so it carries information about the doors.
No, I withdraw my objection. I still can't grasp it by intuition, but my own calculations show that player should not switch. So there's the Monty Hall version of the problem I broke down on!
I realized where my intuition was wrong! I had assumed that since the condition excluded the choice of a door with a regular goat, the total probability of the worlds available in the problem was 2/3. Since the choices of the door with the car and the door with the nice goat both have probability 1/3, it turns out that they are equally likely: (1/3)/(2/3)=1/2
But the condition rules out more than this! It also rules out “I chose the door with the car and Monty opened the door with the good goat”, which has probability 1/6. Then everything converges: the total probability is 1/2, the door with the good goat has probability (1/3)/(1/2)=2/3, and the option “the door with the car + opening the door with the bad goat” has probability (1/6)/(1/2)=1/3.
doesn't the total probability being 1/2 constitute the same rule out as choosing the door with the car and then the good goat being revealed?
that is, if we take Monty showing us a bad goat as a given, we can't use that given twice to estimate probabilities of other things, i thought.
alternately phrased, since seeing the good goat is ruled out, we can't use a scenario where we see the good goat to get a probability, is my intuition.
Edit: I think I get what you're saying; the phrasing just confused me. You mean that since we can rule out the situation in which you picked the car and Monty shows the good goat (because that's not included in the problem), there have to be more ways for your door to hold the good goat, because it can't be the one Monty shows you. I don't know why this is so confusing to read or explain, given that your obvious intuition is to stay.
Yes, you haven't chosen the ordinary goat. And if you have chosen the car, Monty hasn't shown you the valuable goat, which would be unplayable anyway and also isn't part of the initial constraints.
So there are two options - you are on the door with the valuable goat, or you have chosen the car. 50-50
My thought with the revised Monty Hall problem: we know from the normal one that you should switch if you want the car, which implies you should stay if you want the (other) goat. The only difference is that you can tell the goats apart, but I can't imagine adding that to the original would change anything.
I'm never sure why there is ever even a debate about this kind of thing. Not because the math and logic isn't tricky (it is), but because it's very straightforward to simulate this situation one hundred times with simple python code, so we could just... check.
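For instance, a quick seeded simulation (assuming the host never opens your door or the car's door, and can't tell the goats apart):

```python
import random

# Monte Carlo check of the revised problem. Trials where the host
# happens to reveal the good goat are discarded, since the puzzle
# tells us that didn't happen.
random.seed(0)
stay_wins = switch_wins = valid = 0
for _ in range(100_000):
    doors = ["car", "good goat", "bad goat"]
    random.shuffle(doors)
    pick = random.randrange(3)
    opened = random.choice(
        [i for i in range(3) if i != pick and doors[i] != "car"])
    if doors[opened] != "bad goat":
        continue  # condition of the puzzle: we saw the ordinary goat
    valid += 1
    remaining = next(i for i in range(3) if i not in (pick, opened))
    stay_wins += doors[pick] == "good goat"
    switch_wins += doors[remaining] == "good goat"

print(stay_wins / valid, switch_wins / valid)  # roughly 2/3 vs 1/3
```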
In science, one shouldn't believe an experimental result for certain until it is confirmed by theory. One should also not believe a scientific theory until it is backed up by experiment.
In any case, testing it by experiment shows you the answer in this case, as Naremus demonstrated. But what do you learn, other than the answer?
I found it paradoxical that if you get at least 23 random people together, at least two will likely (>50%) share a birthday. When I delved into this to understand why, and understood its truth, I concluded that if one buys three easy-pick lottery tickets, it is more likely that one ticket will match another than that any of them has the winning combination.
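Both halves check out numerically. For the lottery half I'm assuming a Powerball-sized pool of about 292.2 million combinations; exact fractions are needed because the two probabilities differ only in a second-order term:

```python
from fractions import Fraction

# Birthday problem: chance that at least two of 23 people share a
# birthday (ignoring leap years).
p_no_match = Fraction(1)
for i in range(23):
    p_no_match *= Fraction(365 - i, 365)
p_shared = 1 - p_no_match
print(float(p_shared))  # about 0.507

# Lottery comparison, assuming ~292.2 million equally likely
# combinations. With 3 random tickets, "two tickets match each other"
# barely edges out "some ticket wins".
N = 292_201_338
p_any_wins = 1 - (1 - Fraction(1, N)) ** 3
p_two_match = 1 - (1 - Fraction(1, N)) * (1 - Fraction(2, N))
print(p_two_match > p_any_wins)  # True
```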
I don't disagree. But if you look at the other discussions, people do not just disagree about which is the right theory to reach the correct result, they also disagree about which is the correct result. And that just seems like a waste of brainpower, when it's so easy to check.
Also sometimes you just need the result for some practical purpose, for example to win the game described if you actually participate. This is less applicable here, since the entire point of a thought experiment is obviously to get you to think.
That's also the impression I got. Seems pretty straightforward to me. So I can't tell if I'm under-thinking it or if everyone else is over-thinking it.
There are two 'playable' options, but they aren't equally likely to occur: having chosen the good goat initially is twice as likely as having picked the car initially. Half the worlds in which you picked the car (the ones where the host shows the good goat) are ruled out by the problem definition, even though they could have happened, because the host doesn't know to avoid the good goat.
Assuming the doors are random, and you pick door 1 (the door you pick doesn't change any probabilities), there are 6 possible configurations
G C B #+1/6
G B C #+1/6
B C G #X
B G C #X
C G B
C B G
The important bit: I hope you agree the last two worlds have a 1/3 chance of occurring (1/6 for each ordering) prior to the host revealing a door. Now, the last two break down (using [] to indicate the host's random choice) into two further possible worlds each, based on the host's random choice after your initial choice:
C [G] B #X
C G [B] #+1/12
and
C [B] G #+1/12
C B [G] #X
So adding up the possible worlds that you could still be in, we get 1/3 that you picked the Good goat initially, and 1/6 that you picked the car and the host showed you the bad goat. Since having picked the good goat initially is more likely, we should stay.
You can apply the same cheat as with the original problem, to make it more intuitive:
There are 1000 doors, with one car, one very special goat, and 998 not so special ones.
You choose a door and Monty opens 998 of them, revealing 998 regular goats. Since Monty is working under the impression that you want the car, the probability that the car is behind the door you didn't choose is 999/1000, so you should definitely not switch.
Monty doesn't have enough information to open only the doors with the rubbish goats, so you can't generalise this problem in the same way as the original Monty Hall problem. In the original, Monty has to open only goat doors; in this case he doesn't care about the valuable goat.
So in the original, Monty is not going to show the car in any of the N - 2 (998 here) doors he opens; in this case the valuable goat will appear most of the time among his opened doors.
Yes, Monty wouldn't care about the type of goat, and in most cases he would show it with the rest. But as in the 3 door problem, there just isn't a problem to solve if he shows you the good goat: you can't get the goat and just pick the (door most likely to have the) car. However, if he hasn't shown you the GOAT goat, it's more likely to be in the door you already picked.
This version of the problem seems to place us in a universe with six possible combinations, but the twist is that we actually find ourselves in one with only four possibilities: it is impossible that we initially chose the bad goat. In the original version, we don’t learn anything that changes the odds of switching; in this version, we don’t learn anything at all, although the game is phrased in a manner that leads us to believe that we have.
Play it this way: I'll be Monty and start by opening the door with the bad goat, then let you pick another door. Now I'll offer you the chance to switch. Rather obviously, you've learned nothing since your original choice, and so the odds cannot have changed.
The original version can be rephrased to make the correct choice obvious: I’ll let you choose one door, and then immediately give you the option of holding or switching for both other doors, one of which is certainly worthless. What fools us is thinking that being told precisely which door is worthless changes anything important.
In the classic formulation of the Monty Hall problem, there are three outcomes.
1. You initially pick car. Monty reveals a goat at random. If you stay you win, if you swap you lose.
2. You initially pick goat A. Monty reveals goat B. If you stay you lose, if you swap you win.
3. You initially pick goat B. Monty reveals goat A. If you stay you lose, if you swap you win.
So in 2 out of 3 equally likely scenarios you win if you swap.
In the revised version there are three different outcomes.
1. You initially pick good goat. Monty reveals bad goat. If you stay you win. If you swap you lose.
2. You initially pick bad goat. Monty reveals good goat. You cannot win.
3. You initially pick car. Monty reveals either good goat or bad goat with 50% probability. If he reveals good goat, you cannot win. If he reveals bad goat, you win if you swap and lose if you stay.
By revealing bad goat, Monty has inadvertently eliminated all of option 2's probability weight and half of option 3's weight. Those are worlds you could have been in but now know you are not. So you now know you are in either scenario 1 or scenario 3. But because scenario 3 has only a 50% chance of revealing the bad goat while scenario 1 has a 100% chance, you are twice as likely to be in scenario 1. If you stay you are therefore twice as likely to win compared to swapping.
What makes the Monty Hall problem so counterintuitive is that people tend to think about a probability as a property of the specific scenario. Eg. 3 doors 1 prize must always have the same probability. In actuality, a probability is a statement about *how much we know* about a scenario. In the original, Monty knows where the car is and will always reveal a goat. In the revised version, Monty does not know which goat is good and which is bad, and his actions reveal different information.
Of course the real best play is to try to win the car then offer to buy the good goat after the game is over.
A) 1/3 chance you picked the correct goat and the host showed you the wrong goat. The situation where you picked the correct goat and the host showed you the car cannot occur based on the rules. In 100% of cases here you will see the wrong goat, so seeing the wrong goat gives you no information on whether you picked the correct goat or the car.
B) 1/3 chance you picked the car, but when the host shows you the wrong goat, you can eliminate the scenario where you picked the car and the host showed you the correct goat. This is 50/50.
C) 1/3 chance you picked the wrong goat but when you see the wrong goat you can eliminate these scenarios.
Of the scenarios not eliminated by assumption, in 2 you picked the correct goat, and in one you picked the car.
He's more likely to have shown you the normal goat in the "you picked special goat" situation (100%) than in the "you picked car" situation (50%) or the "you picked normal goat" situation (0%), so you do get to update your probabilities: You're more likely in the scenario where you're more likely to have seen what you actually saw. (Unlike in the original problem, where you see "goat" 100% of the time.)
It's actually equivalent to the original problem: 1/3 odds your original pick was car, 2/3 odds your original pick was _the other goat_.
I think the easiest way to express what's going on is with the odds form of Bayes' theorem.
In the original Monty Hall problem, say we hypothesize that we picked the right door from the start. We believe this to be true with 1:2 odds (since there's 1 car and 2 goats).
Then, Monty shows us a goat. If we picked the car, that'll happen 100% of the time, so we don't adjust the 1. If we picked a goat, that'll also happen 100% of the time, so we don't adjust the 2. Thus, after showing us a goat, we still believe that our initial pick is correct with 1:2 odds. This implies that the *alternative* is correct with 2:1 odds, so we should switch.
Then, consider the revised problem.
There's 3 cases: We picked the special goat, we picked the normal goat, or we picked the car, each with equal odds, so 1:1:1.
Then Monty reveals the normal goat.
Under the hypothesis that we picked the special goat, this is unsurprising. That's what *must* happen, so we don't adjust.
Under the hypothesis that we picked the normal goat, this is impossible, so the odds go to 0.
Under the hypothesis that we picked the car, this is *surprising*. We'd expect to be shown the normal goat only half the time (we'd see the special goat the other half), so our 1 gets multiplied by 1/2.
After the information, our odds are at 1:0:0.5. We can re-normalize to 2:0:1, meaning that we expect there's a 2/3 chance that our door has the special goat, and thus we should stay.
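Spelled out as a calculation (just the update described above, nothing new assumed):

```python
from fractions import Fraction

# The odds-form Bayes update. Prior odds over what's behind our door,
# and the likelihood of "Monty reveals the normal goat" under each
# hypothesis:
prior = {"special goat": 1, "normal goat": 1, "car": 1}
likelihood = {
    "special goat": Fraction(1),     # he must show the normal goat
    "normal goat": Fraction(0),      # impossible: we'd be holding it
    "car": Fraction(1, 2),           # he shows either goat at random
}

posterior = {h: prior[h] * likelihood[h] for h in prior}
total = sum(posterior.values())
probs = {h: p / total for h, p in posterior.items()}
print(probs["special goat"], probs["car"])  # 2/3 and 1/3
```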
I think the easiest way to solve the revised Monty Hall problem is to use the solution to the traditional Monty Hall problem.
No matter what you desire in your heart of hearts, since Monty doesn't know about it, it will always be the case that switching has a 2/3 chance of winning you the car. Therefore not switching has a 2/3 chance of winning you whichever goat has not been seen yet.
In the situation described, the goat that hasn't been seen yet is the valuable one, and so you have a 2/3 chance of getting it by not switching.
No, wrong. Because in the original, the host can ALWAYS open another goat door. But in the new scenario, if you originally picked the bad goat, it is no longer possible for the host to open a bad goat door. In the new scenario (unlike the original), that possibility cannot happen.
Kindly was right. The new problem is trivial to solve in terms of the old problem. The host never seeks to open a bad goat door. All goats are alike to him.
But does the host EVER open a door with a good goat? The description says no, it didn't happen. But it ... "could have" happened? You have to be careful in probabilities, with assigning weights to possible worlds that you know did not occur. In what precise sense "might" Monty have opened a good goat door -- given that we know he didn't?
It matters a lot what space of games you're drawing from. We KNOW that, in the games we're considering, you CANNOT have originally chosen the bad goat door. This is different from the original Monty Hall problem, where originally choosing the "bad goat" WAS a possibility. So the possibilities (potentially) have changed. You have to be careful, solving it "in terms of the old problem". It isn't the old problem any more.
Here's a possible repeated game: FIRST, Monty opens a door with a bad goat. THEN you choose from the remaining two doors. (One has a good goat, one has a car.) THEN Monty offers you the chance to switch. Do you agree, in this game, that the odds are 50/50, whether you switch or not? You could play it 1000 times, and you'll get the car 50%, and the good goat 50%, whether you switch or not.
You want to say that the game that we are talking about is different than the game in my previous paragraph. Different how, exactly? Here's one way it might be different: if you played the original Monty Hall problem, and your first choice was the bad goat, then the scenario described CANNOT happen. (Monty doesn't have a bad goat to show you.) So it must be clear that you cannot be playing the original game. The remaining question is: if you choose the car first, might Monty have shown you the good goat then? According to the puzzle, he didn't. But ... "could" he have? What does that actually mean? "Could" you have chosen the bad goat first? It seems, somehow, that you "couldn't" have chosen the bad goat ... yet Monty "could" have chosen the good goat (if you chose the car).
What exactly is the difference in analysis, between the "couldn't" (you first choosing the bad goat), and the "could" (Monty choosing the good goat when you choose the car first)? According to the scenario, neither one happened. Why is one still "possible", somehow, but the other isn't?
This only matters if you believe that the optimal strategy in the *original* Monty Hall problem depends on the quality of the goat behind the opened door.
To put it differently, suppose that we are in the scenario with the eccentric billionaire's goat, but you are philosophically opposed to helping billionaires retrieve their lost pets, and so both goats are equally worthless to you: you want the car. When Monty opens the door and you see the bad goat behind it, that's vaguely interesting but doesn't affect you in any way: by the standard Monty Hall strategy, you have a 2/3 chance of getting the car if you switch, and a 1/3 chance of getting the (billionaire's pet) goat.
Similarly, there is a different world, in which Monty opens the door with the billionaire's pet goat behind it. Again, that's vaguely interesting but doesn't affect you in any way: by the standard Monty Hall strategy, you have a 2/3 chance of getting the car if you switch, and a 1/3 chance of getting the (bad) goat.
The problem is that your last paragraph didn't happen, by assumption in the new scenario. So it matters a lot about why it didn't happen. It depends a bit on when you eliminate those possible worlds. If you do it before you analyze the scenario at all (originally choosing the bad goat can never be part of this new game), then it's only 50/50 for the remaining choices.
You're basically making an analysis based on a counterfactual that didn't happen (you choose the bad goat, and Monty shows the good goat). You're assigning it some weight, some probability that it "might have" happened -- even though we know it didn't.
That's where the disagreement is. With a single iteration, there aren't any probabilities at all. You either get the goat, or the car, 100%. Probabilities are about your state of knowledge, and they can only be verified by a repeated experiment where you set up some scenario over and over again, and then count the various outcomes.
What is the scenario that is being set up over and over again, in this case? If I run the example 1000 times, what are the 1000 examples? Does Monty show the good goat in some of those 1000 runs? Or not?
You need an "example generation machine", that can generate scenarios with outcomes you can count. This new problem is a little bit ambiguous about what the space of possible games is.
We're assuming that Monty doesn't know anything about the goats, so he is equally likely to choose either goat. Therefore the two scenarios are equally likely, and more importantly P(your door has car | Monty picks good goat) = P(your door has car | Monty picks bad goat).
I agree that if we don't assume this, then the answer to the entire problem is different. But I think it's unambiguous that we should assume it.
The difference is that the host opens a GOAT door, chosen uniformly at random from the two goats. The uniform part is the active ingredient here. In the original show, the two goats were indistinguishable, but if the host opens a goat at random then the two are still indistinguishable TO THE HOST, although not to you.
The thing about the car is that the host never opens a car door (they know where the car is, after all). That is what gives you the 2/3 probability of getting the prize you really want, which, in the original game, is guaranteed not to be behind the door the host opens.
The revised problem as stated (you've already seen a goat but not the one you want) has cut off the branch in the probability tree where you get the good goat, so you're conditioning on seeing the other one.
If you switch, you still get probability 2/3 of a car. Therefore if you don't switch, you get probability 2/3 of "a goat" - but in this case you know it's the goat you want.
You pick a door. Monty randomly opens a door you didn't pick (not worrying about whether it has a goat or a car). He happens to reveal a door with a goat. Now what should you do?
Before you saw what he revealed, there's a 1/3 chance your first pick was the car (and he definitely reveals a goat), a 1/3 chance your first pick was a goat and he reveals a goat, and a 1/3 chance your first pick was a goat and he reveals the car.
We can eliminate the last one, now that we've seen the goat, so it's 50/50.
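The two surviving branches can be checked with a quick Monte Carlo sketch (the door labels, trial count, and seed are mine): Monty opens one of the unpicked doors at random, and we throw away the runs where he reveals the car.

```python
import random

def trial():
    # "Ignorant Monty": he opens one of the two unpicked doors uniformly
    # at random, not knowing (or caring) what is behind it.
    doors = ["car", "goat", "goat"]
    random.shuffle(doors)
    pick = 0  # by symmetry, always pick door 0
    opened = random.choice([1, 2])
    if doors[opened] == "car":
        return None  # Monty revealed the car; condition this run away
    other = 3 - opened  # the remaining unopened door (indices 1+2 = 3)
    return doors[pick] == "car", doors[other] == "car"

random.seed(0)
results = [r for r in (trial() for _ in range(100_000)) if r is not None]
stay = sum(s for s, _ in results) / len(results)
switch = sum(w for _, w in results) / len(results)
print(f"P(stay wins)   = {stay:.3f}")    # ~0.5
print(f"P(switch wins) = {switch:.3f}")  # ~0.5
```

Both frequencies come out near 1/2, matching the argument above: conditioning on "a goat happened to be revealed" is not the same as "a goat was deliberately revealed".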
It's interesting that it's different again when he definitely chooses not to reveal the car, not knowing that one of the goats is the prize you want!
How do you feel about the original Monty Hall problem? If we apply the same chart but instead want the car instead of either G or B, then stay/switch is also 50/50, right?
The lesson I take from both the original and this one is that probability is not about the physical setup; it's about information. Think about these problems as a Bayesian would. (In fact, a straightforward application of Bayes' theorem really shows it here.)
In this case, when you get lucky enough to see the bad goat, that tells you that you probably picked the good goat. Worlds in which Monty reveals the bad goat are simply less common when the bad goat is not the one you chose.
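That update can be worked out exactly with Bayes' theorem; here's a sketch in Python, assuming (as discussed upthread) that Monty never opens the car door and can't tell the goats apart, so he picks uniformly among the goat doors available to him.

```python
from fractions import Fraction

# Hypotheses about what is behind YOUR door, each with prior 1/3.
prior = Fraction(1, 3)

# Likelihood of the evidence "Monty reveals the bad goat" under each hypothesis.
likelihood = {
    "good goat": Fraction(1),     # only the bad goat is left for Monty to open
    "car":       Fraction(1, 2),  # Monty picks one of the two goats at random
    "bad goat":  Fraction(0),     # impossible: the bad goat is behind your door
}

evidence = sum(prior * l for l in likelihood.values())
posterior = {h: prior * l / evidence for h, l in likelihood.items()}
print(posterior)  # good goat -> 2/3, car -> 1/3, bad goat -> 0
```

The posterior puts 2/3 on your door hiding the good goat, exactly because "Monty reveals the bad goat" is twice as likely in that world.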
This is what I was thinking (50/50 at the end), but I wrote a simulation that says you should only switch 1/3 of the time at the end: https://pastebin.com/YtvBWpc5
Another way of working the problem that works out the same is to reduce it to the already-known solution to the standard Monty Hall problem, that there is a 2/3 chance that the car is behind the last door and a 1/3 chance the goat is there. Your door has a 2/3 chance of goat and a 1/3 chance of car. Since the still-hidden goat is Rosebud, you should stay with your door to maximize your chance of finding him.
Your door is the bad goat (1/3): Monty shows you the good goat (he won't show you the car) and you lose immediately.
Your door is the good goat (1/3): Monty shows you the bad goat.
Your door is the car (1/3): Monty picks one of the two goats at random, so he shows the good goat with probability 1/6 and the bad goat with probability 1/6.
The switch strategy wins with probability 1/6 (car behind your door, bad goat shown, switch to the good goat). The keep strategy wins with probability 1/3 (good goat behind your door). With probability 1/2 you lose immediately. Conditional on seeing the bad goat, you have a 2/3 chance of winning by not switching.
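A quick simulation of the same setup (labels and seed mine; Monty knows where the car is, never opens it, and otherwise opens a goat door uniformly at random) reproduces that conditional 2/3:

```python
import random

def trial():
    doors = ["car", "good", "bad"]
    random.shuffle(doors)
    pick = 0  # by symmetry, always pick door 0
    # Monty opens an unpicked door that is not the car, at random.
    openable = [i for i in (1, 2) if doors[i] != "car"]
    opened = random.choice(openable)
    return doors[opened], doors[pick]

random.seed(1)
runs = [trial() for _ in range(100_000)]
# Condition on the runs where Monty revealed the bad goat.
saw_bad = [pick for opened, pick in runs if opened == "bad"]
p_keep_wins = sum(p == "good" for p in saw_bad) / len(saw_bad)
print(f"Conditional on seeing the bad goat, keeping wins {p_keep_wins:.3f}")  # ~0.667
```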
This feels bizarre to me. Why would the host's intentions matter? But I think your reasoning is correct.
When you see new information, it's not enough just to eliminate impossible options.
You also have to take into account, that the options that made the new evidence more likely, are themselves more likely than options that had a lesser chance of showing you that information.
Having the host open a goat door is more likely if you originally chose a car door, than if you chose a goat door. Which bumps the probability that you originally chose a car door from 1/3 to 1/2.
It's not precisely that the host's *intentions* matter, but rather that their *algorithm* matters. In order to understand the significance of the information the host reveals, you need to understand what *could* have been revealed. Since the host's actions depend on what door you originally picked, you can't calculate the counterfactual observations without knowing the host's decision algorithm.
For example, suppose the host acts like this:
- If your original pick was the car, the host reveals a goat and asks if you want to switch
- If your original pick was a goat, the host treats that as your final choice and does not give you the opportunity to switch
In this case, if the host reveals a goat, then there is 100% chance that you already picked the car, and a 0% chance that you will get the car if you switch.
You can make it even more extreme. Say that the host says, “I will open the lowest-number door that is not the one you pick, and not the one with the car”. If you pick 1 and the host opens 3, it’s obvious that 2 has the car. But if you pick 1 and the host opens 2, then you’re now 50/50.
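Since that host rule is deterministic, the posterior can be read off by enumeration; here's a small sketch (door numbering as in the comment, and a uniform 1/3 prior on the car's location):

```python
from fractions import Fraction

# You always pick door 1; the host opens the lowest-numbered door
# that is neither your pick nor the one hiding the car.
def host_opens(car):
    return next(d for d in (1, 2, 3) if d != 1 and d != car)

posteriors = {}
for observed in (2, 3):
    # The host is deterministic given the car's position, so the posterior
    # is uniform over the car positions consistent with what we observed.
    consistent = [car for car in (1, 2, 3) if host_opens(car) == observed]
    posteriors[observed] = {car: Fraction(1, len(consistent)) for car in consistent}

print(posteriors)  # host opens 3 -> car is at 2; host opens 2 -> 50/50 between 1 and 3
```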
> You also have to take into account, that the options that made the new evidence more likely, are themselves more likely than options that had a lesser chance of showing you that information.
That's true, but you're not doing it.
> Having the host open a goat door is more likely if you originally chose a car door, than if you chose a goat door.
That's false. By definition, the probability that the host opens a goat door is 100% in all cases.
I just want to commend the OP for rephrasing to at least implicitly specify the host's decision algorithm. The host opens a door showing a goat "as is traditional", meaning always or nearly always. This rules out algorithms like "Host shows you a goat and offers a chance to switch if you picked the car, but doesn't open any door or give you a chance to switch, because cars are expensive and goats are cheap". Or, "Host decides what to do based on what he feels the audience would most enjoy, and since you're a telegenic and enthusiastic young woman he thinks they'd be happiest if he nudged you towards switching your choice to the car (which isn't being paid out of *his* salary)".
The original Monty Hall problem, what divided the intertubes back in the day, only specified that the host revealed a goat and offered a switch on this one occasion, and much of the dispute was based on differing assumptions as to the algorithm.
And, of course, this was when Monty Hall was still alive and could tell you what the algorithm was if you asked. It was not, "host always opens a door to reveal a goat".
Agreed, I've always felt the problem tells us more about how easy it is make, rely on, and be confused by unspoken assumptions, than it does about probability.
As Jeffreys says in the Theory of Probability:
"The most beneficial result that I can hope for as a consequence of this work is that more attention will be paid to the precise statement of the alternatives involved in the questions asked. It is sometimes considered a paradox that the answer depends not only on the observations but on the question: it should be a platitude."
Cool. I didn't realize "the Monty Hall problem" was not in fact a regularly occurring game on the show. Never watched it. History has been rewritten by this brain teaser. Everyone will think, as I did, that it was the whole show.
> The original Monty Hall problem, what divided the intertubes back in the day, only specified that the host revealed a goat and offered a switch on this one occasion, and much of the dispute was based on differing assumptions as to the algorithm.
This is just you not knowing what you're talking about. The problem is older than popular awareness of the internet. It divided the letter-writing public.
> Savant was asked the following question in her September 9, 1990, column:
> Suppose you're on a game show, and you're given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say #1, and the host, who knows what's behind the doors, opens another door, say #3, which has a goat. He says to you, "Do you want to pick door #2?" Is it to your advantage to switch your choice of doors?
> This question is called the Monty Hall problem due to its resembling scenarios on the game show Let's Make a Deal, hosted by Monty Hall. It was a known logic problem before it was used in "Ask Marilyn".
> If the host merely selects a door at random, the question is likewise very different from the standard version. Savant addressed these issues by writing the following in Parade magazine, "the original answer defines certain conditions, the most significant of which is that the host always opens a losing door on purpose. Anything else is a different question."
And her second followup pointed out:
>> We've received thousands of letters, and of the people who performed the experiment by hand as described, the results are close to unanimous: you win twice as often when you change doors. Nearly 100% of those readers now believe it pays to switch. (One is an eighth-grade math teacher who, despite data clearly supporting the position, simply refuses to believe it!)
>> But many people tried performing similar experiments on computers, fearlessly programming them in hundreds of different ways. Not surprisingly, they fared a little less well. Even so, about 97% of them now believe it pays to switch.
>> And a very small percentage of readers feel convinced that the furor is resulting from people not realizing that the host is opening a losing door on purpose. (But they haven't read my mail! The great majority of people understand the conditions perfectly.)
Isn't this version of the problem simpler than the original one? You've distinguished the goats, so now there are just 3 doors with 3 prizes. You've already been told that Monty opens a door to reveal a prize you're not interested in so you should stick to win 2/3 of the time. (Because, as in the classic Monty Hall problem, the car is behind the third door 2/3 of the time.)
Your point is absolutely right though - a lot of the confusion on these problems hinges on Monty's behaviour not being completely specified. I think in your scenario the key information is that Monty doesn't know/care where the car is, so in opening a door he's not giving you any information about where the car is or isn't. You can swap the order of events: Monty reveals a goat, you choose from the remaining doors, and have a 50/50 chance of winning. Conversely if Monty avoids the car, he reveals information and improves your chances.
> You've already been told that Monty opens a door to reveal a prize you're not interested in so you should stick to win 2/3 of the time.
No, the opposite. This is the same as the original problem, with no differences except in which prize you want. Monty opens a door to reveal a goat. What he reveals is a prize you don't want, but that's not why he opened the door.
The results are identical to the original problem, because it's the same problem. After Monty opens the one door, the final door has a car 2/3 of the time, and your door has a car 1/3 of the time. Since you don't want the car, you don't switch.
I don’t know if non-Anglo countries also believe this but the idea that being cold gives you a cold is very widely held here despite people also knowing that it’s transmitted by a virus. I know there’s some research about seasonality etc but a lot of people seem to go way beyond that
…the cold water thing isn't entirely wrong either, though? Drinking the local tap water without boiling it, in places where that belief is prevalent, carries a risk of making you ill. It's lumping the fan thing in there that seems unfair - that's the belief that isn't like the others.
There are two stable equilibria. If everyone believes that drinking non-boiled tap water is safe, then you want regulations in place regarding the amount of bacteria in the water, so it is indeed safe.
If nobody thinks it is safe, then nobody cares and it will probably be somewhat unsafe by default.
(Source: am Chinese, but don't have a scholarly understanding of this or anything) I do think it's more than just believing that the water needs to be boiled in order to be safe. Like if you boiled water, let it cool, and added ice to it, many Chinese people would probably think that it would hurt them to drink it. Like it will affect your digestion or fertility.
I suspect that the second thing you mentioned is the real origin of the belief that cold causes colds, and the first was something that people came up with afterward.
I think it's got a lot more to do with the fact that people get colds at the start of winter. (Because the folk belief is true, at least in certain ways.)
What makes your nose run is the counter-current heat-exchange system that helps you avoid wasting lung water. The air you exhale is warm and full of water; in the nose it meets cold blood through very thin nose walls, cooling down and making the water gather into droplets. Some of those run out of your nose, as a side effect. The system works more strongly when the air (and the nose) are colder. The nose running when it's cold is not a disease; the nose is not producing more mucus. Heat exchange systems like this are common among mammals.
This is interesting, but are you sure? Would the nose stop running (would the nose stop, haha) if you started breathing through your mouth? Or if you either inhaled or exhaled through your mouth consistently? If this is true, it would have to stop (maybe after drying the nostrils). I may test that if it gets cold again.
It's standard animal physiology. You can test if your nose only starts running outdoors, but not indoors (it must be actually warm indoors; some people keep 16 centigrade at home in winter, this can make the nose run). If it also runs at a warm temperature, you probably have an infection, allergy, or other irritation in the nose, producing extra mucus.
Good to know. I think you missed part of my point: the body doesn't have a reserve of cold blood, right? If the nose is cold, it's because the nose is cold. So within a couple minutes of coming indoors, the dehumidifier effect should stop. And give it a few minutes more for the nasal passages to drip a bit. Though to know how long this takes, you'd have to get a reference measurement: use some nasal spray and time how long your nose takes to dry.
Yeah, I don't know exactly how long it will keep dripping after coming indoors. Probably it's also slightly different person-by-person. Some have naturally colder noses even indoors, etc.
I don't know about the US, but in Europe people believe that it's ok to blow your nose when you have a cold, instead of swallowing it or spitting it on the street. In medical terms, that's gross.
We find it gross because our parents have told us that it's gross (unless we do it on the soccer pitch, if you live in Europe).
Mucus on the street will not infect anyone. It does not find its way to the faces of people, and the street is a terrible place for germs to survive.
Asian people don't mind anyone spitting on the street, but they find the thought abhorrent to blow mucus into a handkerchief, put this into your pocket where it has perfect breeding conditions, touch it with your hand and then perhaps you even want to shake hands with them. From hygienic perspective, they are right that each of these steps is gross.
In the U.S., standard practice is to blow your nose into tissue paper made for that purpose and then throw it in the trash. And, ideally, washing your hands afterward.
This could also be related to the Asian belief that shoes worn outside are so absolutely disgusting that they can't be tolerated inside, not even for a moment, not even for visitors. If the streets are expected to be covered with other people's snot then this makes more sense.
So, Asia is a big place (cit?). In Japan people do not spit anywhere. On top of that a common advertising tactic in winter is to hand out tissues with ads on them; so people are doing something with those tissues. I'm pretty sure spitting is illegal in Singapore.
If you're thinking about China only, then there are reasons outside of cleanliness to consider, like weird hangovers from the Cultural Revolution.
Most Germanic languages call the disease something to do with low temperature. Also Greek, at least some Slavic languages and some Romance languages, and the Japanese word for the disease (kaze) means 'wind'. The Korean word isn't related to weather, but they also believe that you get it from the cold. So this seems pretty common for people living in places that get cold.
I believe the British version of fan death is "non-bio" (i.e. enzyme-free) laundry powder - lots of people here believe that the enzymes in "bio" washing powder cause them skin problems, and every supermarket has both "bio" and "non-bio" washing powder for sale (if anything, the non-bio stuff is more common). It's my understanding that this is total nonsense, even for babies, and that in most countries non-bio washing powder is either unavailable or only available in kooky health stores. Can non-Brits confirm?
I lived in the UK for my first 30 years. And the thing we were taught at school is that the enzymes in "bio" got into the waste water and were somehow worse for the environment. I'd never heard the skin problem angle.
But now you've said it - it seems weird to believe that bio washing powder is extra polluting, but that it shouldn't be banned - given how much Brits love banning stuff.
Huh, I've never heard that. I've heard that phosphates in detergent can cause environmental problems, but according to Wikipedia that's true (and led to them being banned in the EU and US): https://en.wikipedia.org/wiki/Phosphates_in_detergent
I mean, at least it's plausible though. The idea that certain unspecified chemicals rubbing up against your skin all day can cause skin problems sounds pretty reasonable to me as an educated non-specialist, whereas it's hard to come up with a plausible mechanism for fan death.
Some people do have allergies to specific detergents, including "bio" ones. So this is a real thing, the only question is how prevalent it is. I do know of one case in my extended family where a child got a rash after the family, which typically uses non-bio, accidentally bought bio detergent from the same brand (the other children in the family were not affected).
In my experience it’s less the type of detergent and more to do with how well it has been rinsed from the clothes. Any soap will irritate the skin if not properly rinsed.
There are at least 2 theories why being cold would increase the risk of infection:
- the body is shifting its energy expenditure away from keeping up the immune system and towards heat production, as well as reducing the blood flow to external surfaces like the mucosal membranes in your nose, thereby also reducing the effectiveness of your immune response.
- typical cold viruses are evolutionarily optimized to reproduce in the upper respiratory tract instead of the lower parts or even the lungs. This makes an infection much easier on the body, so you can still run around and infect a lot of other people, thereby increasing the fitness of the virus. The virus recognises where it is (lung or nose) via the surrounding temperature - if it's cold, it must be in the nose, so "low temperature" is its signal to start reproducing.
We are constantly in contact with all kinds of viruses anyway, at least the common cold ones. Whether you actually get sick is more a question of how the tug of war between germs and immune system ends. I also think I remember reading about experiments where researchers could influence the susceptibility to catching a cold by cooling down the feet of their test subjects while smearing cold viruses into their noses.
The medical literature on this isn’t certain. Mostly they will say that cold isn’t a cause of a cold but agree the immune system could be compromised.
I think that this is one case where the old wives' tales are probably true, despite the lack of medical consensus. Remember that normal body temperature in the 19th century (and presumably before) was 1°C higher than today, so people were always fighting something; getting very cold or wet would then cause the sickness to become visible.
You need to be pretty darn cold for the immune system to get compromised; for the most part, when you're regularly cold, you just consume more kcal, that's it.
If I recall, the reason is that cold air is much more dry, so the membranes in your nose produce more moisture to avoid getting all dried out and frozen, but this moisture also makes it easier for airborne respiratory viruses to germinate (wrong term, I know, but you get my point) and create a local infection.
Good question. Not sure, but maybe the heat and stronger sunlight cause virus particles to break down more quickly, like Covid. The cold seems to be more potent than just the dryness, too. My nose starts running pretty quickly when the temperature drops below 40F. Not so on trips to dry warm places like California or Nevada. I just get thirsty faster.
Also, these don't even have to be intrinsically important. I'll explain: because epidemics are cyclical in nature, and people once infected become immune for a while, even if a given season (here, winter) has only a small advantage in virus replication, it may turn out that it gets a very large share of all epidemics.
I don't recall any of the polar explorers ever having caught a cold while in the iced lands and writing about it. There have to be other vectors of infection around you for the disease to get to you.
(I know that cold is infectious. But, in theory, it could be your own germs that colonize the respiratory tract suddenly going haywire.)
It is almost certainly true that actually being cold ever so slightly increases your odds of getting sick as it does divert energy from other systems to more rapid heat generation.
Steven Pinker recently linked to evidence that cold undermines the immune system in the nasal passages, making people more susceptible to viruses. The old studies that supposedly “debunked” this “myth” put people in cold rooms with no exposure to viruses (I hear second hand), which isn’t a charitable interpretation of the folk belief. Yes, the cold doesn’t literally give you a cold, but yes, it increases the chances of you contracting a virus.
I was very paranoid during Covid, avoiding situations with people indoors. (The risk of catching a respiratory infection outdoors is dramatically lower.) I occasionally ate outdoors with my small children in winter or damp weather. They did not catch cold from that.
I still am not entirely convinced that being cold doesn't cause *some* kind of cold-like illness, even if it's not a cold in particular, considering the time I got a cold almost immediately after breathing too much cold air during a morning track practice (as in it was starting to hurt my throat and lungs); or the time that I stood waiting for the bus for an hour in -20℃ weather, and that very night I got a fever for the first time in years and slept like the dead for 15 hours. It would take a lot to convince me that the cold didn't play some more or less direct role in causing those illnesses. (Although I'm certainly open to the possibility that they weren't caused by pathogens.)
Re: Turkey and fertility - I would think that Turkey is *more* correlated with western social trends than China or Korea or India are (being almost in Europe), even if less than Latin America and Eastern Europe. The fact that fertility is falling in *all* these areas suggests this isn't about westernness at all - it's probably about some broader techno-economic trends (e.g., various things that make the difficulty of achieving a child-free or a child-ful lifestyle harder or easier, or things that make the appeal of a child-free or a child-ful lifestyle greater or lesser, or things that make people more choosy before they settle down with a partner, or whatever - better TV and smartphones and GPS-based hookup apps all seem like they would cross cultural borders).
Pretty much every country in the world has a TFR graph that looks like this \. Some have a steeper slope, some have it more gradual, but it's declining pretty much everywhere. Only Central Asia seems to be an exception, but no idea why. I don't think it's Islam, though. Central Asians are on the lower end of the Islamic religiosity spectrum. Even highly religious Muslim countries like Afghanistan, Niger, and Yemen all have declining TFRs, though at a slower rate.
I don’t think that gives an explanation of why TFR has been falling in most poor countries, even as it has risen in Kazakhstan and Uzbekistan. That might explain why all these countries are higher than more middle income countries, but not the direction of change.
TFR is falling everywhere now because of microplastics, which can now be found all over the world. I don't have a citation for this, because I'm just making it up now.
That’s interesting - I hadn’t realized that TFR was actually increasing in Kazakhstan and Uzbekistan! But Tajikistan and Turkmenistan seem to have turned down again after an increase that started at the same time as those others. And I’m surprised that Afghanistan has been decreasing as fast as Pakistan and India - I had thought they were staying stubbornly high for a long time!
The increase in both Kazakhstan and Uzbekistan is interesting, because they've been on divergent paths culturally - Uzbekistan has really started leaning more into Arabic-style Islamic cultural stuff, while Kazakhstan is still really secular IIRC (it helps that it has a much better economy).
I am not a stats guy so I don't know if this makes sense. But I read a while ago that it's caused by the emigration of Russians from Central Asia. Basically, before 1990, Russians constituted a large share of the Central Asian population. They had a much lower fertility rate than the locals. In the 90s and 2000s, Russians began to leave Central Asia, and as their proportion dropped, it artificially increased the TFR of the Central Asian countries as a whole, even though the locals' TFR was also gradually decreasing. Tajikistan and Turkmenistan have almost reached the floor of their Russian population, so now their TFR has started to decline again. I didn't find anything else that talked about this as a factor, so I don't know if it is true.
The start of Afghanistan's decline in TFR coincides with America occupying the country and imposing a revolution in gender roles- we'll have to see if it ends up being a permanent change now that the Americans are gone. I hope so.
It's been more than 3 years since the Taliban took over the country, and the TFR continued to decline in 2022, 2023, and 2024. The Taliban could take away every single human right from women and I think the TFR would still decline. Because even the most conservative Islamist nutjob man still would have to pay for all the children his wife/wives give birth to. The wife can't work, and if you have like 10 children, who is going to feed them all? It's not just food. He still knows nice clothes, household goods, things in general exist elsewhere, and if he has one less child, he could maybe have a slightly better quality of life. Modernity spares no one.
I think it's a combination of declining childhood mortality rates and reliable contraception. It's easier to manage your number of children and much fewer of them are dying before adulthood - that's why fertility decline is the slowest in ultra-poor parts of subsaharan Africa.
I think Kazakhstan's TFR rose because the low-TFR Europeans (mostly Russians, Germans, and some Ukrainians) emigrated after the end of communism, and the remaining population was Central Asian Muslims who already had a somewhat higher TFR. It's a selection effect. Kazakhstan was around 7% German at one point.
Central Asian countries aren't actually that 'poor' by global standards, except I think Tajikistan.
re. specifically the "economic trends" side: Turkey might be a bit of special case because of just how poorly its economy is being managed. In the time range shown (2016 - 2023), the Turkish Lira lost about 85% of its value relative to the USD, current inflation is ~50% and unemployment looks like ~10%.
To the extent that fertility is affected by the difficulty of becoming a financially stable adult, becoming a financially stable young adult in Turkey seems particularly difficult.
I know there are lots of Turks who immigrated to Germany (not Greece, too much bad blood I assume), originally as guest-workers but they tended to stay. They've worked out better than many other immigrants from Islamic countries in Europe, perhaps because Turkey was ruled by secularists when that first wave came over, or perhaps Turkey being different caused it to be ruled by secularists. At any rate, there are clear differences between Turkish vs North African immigrants in Brussels.
Turkish immigrants to Germany came from the most backwards and Islamist parts of the country. They are big supporters of Erdogan despite many being 3rd generation at this point. They only seem better due to the mass migration of even more religious Muslims from 2010s onwards.
I don't think that this has to do with religion. Yes, North African immigrants cause a lot more troubles than Turkish immigrants, but Syrian immigrants also cause very little trouble despite being religious.
The Turkish immigration waves were decades before the ones from North Africa (at least for Germany), so those are very different demographics. Perhaps a bit closer in terms of demographics are the Romanian immigrants from late 90s and early 00s, and I think those were the trouble-makers of their time, despite being orthodox Christians.
I think the more important aspect is that immigrants from North Africa (and from Romania at that time) come in big family clans which act like mafia and try to build criminal networks, while Syrian refugees tend to have individual backgrounds.
Hum? I'm surprised by this, I hear this for the first time. Are you talking about Germany?
I think it has been consistent over years that refugees from Syria commit few crimes in Germany, much less than refugees from other countries, and massively less than immigrants from Maghreb. They still commit somewhat more crimes than the average population, but most of this is demographics: most of them are young men, and their crime level is not much above the level of other young men in Germany.
I googled for "Kriminalität von syrischen Flüchtlingen", and the first 2-3 hits seem to confirm this (links are in German).
I also found the report of the BKA (German national crime authority) from 2023. It mentions several problematic nations of origin (mainly Maghreb+Georgia, for some specific crimes also Gambia, Nigeria, Somalia, Albania, Kosovo and Serbia), but they don't mention Syria.
I guess you mean the sexual assaults during New Year's Eve in Cologne. Those were African immigrants, as far as we can tell. At least among the suspects that could be identified. "Two-thirds were originally from Morocco or Algeria."
"Most of those 120 (identified suspects) had come from North Africa."
Yes, just like it’s big news that Los Angeles has fires because of the delta smelt. Stories get slightly modified (whether intentionally or accidentally) and when a version taps into the zeitgeist, it goes viral, even if it’s not right.
"The fact that fertility is falling in *all* these areas suggests this isn't about westernness at all"
No it doesn't. The medium through which the ideological pathogen that causes low fertility (Progressivism, specifically its feminism tentacle) spread is *academia*, which is the world's only near-global monoculture. It originated in the West, even though it has now spread beyond it.
I would have thought that Hollywood and Bollywood were more global than academia, which is still relatively limited in its reach even in Europe and North America, where barely half of people have any interaction with it.
Given how deeply fertility has fallen in places like rural Serbia and Russia, there must be other factors at play. Not many Tumblr-feminist academic types in Kragujevac or Tula.
On ChatGPT and the environment, my calculation is simple. I figure that anything that releases CO2 costs money, and anything that wastes water costs money. My understanding is that OpenAI is currently operating at -200% profitability, so that they are spending "only" three times as much money as they are taking in. So if I'm spending $20 a month on them, then the maximum amount of pollution I can be causing is like $60 of gasoline or water use. Not great, but not as bad as some other things I do. And a free customer can't be causing too much at all.
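For anyone who wants the arithmetic spelled out, here's the same back-of-envelope bound as a few lines of Python (the -200% margin figure from above is the assumption doing all the work):

```python
# Upper bound on "pollution dollars" per subscriber.
# Assumption: OpenAI spends ~3x what it takes in (-200% profit margin),
# so at most $3 goes out the door for every $1 a customer pays in.
subscription = 20        # $/month for a paid plan
cost_multiple = 3        # total spending / revenue, from the margin figure
max_spend = subscription * cost_multiple
print(max_spend)         # 60: ceiling on dollars that could buy energy/water
```

Even if every one of those dollars went to electricity and water, which they obviously don't, the bound holds.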
To add to this: through their API they charge $2.50 for 1 million input tokens on GPT-4o. 1M tokens is roughly the length of the entire A Song of Ice and Fire series. Far more than I can imagine most people using in a month, even accounting for how your conversation is repeatedly resubmitted.
I'd also guess their free users outnumber their paid users by a huge margin and that's part of the reason for their low profitability (along with capex and training ever larger models). The inference itself is probably fairly cheap per query.
Money ≈ impact is a reasonable order of magnitude ballpark, but definitely with an emphasis on "order of magnitude". IANA expert but from what few LCAs I've done, the "kg CO₂ eq. per USD" ratio hovers around 0.5 kg/$ for commodities (metals, wood), but can be an order of magnitude or more lower for end products (phones, dishwashers), or nearly an order of magnitude higher for direct emitters (concrete, petrol). So in terms of climate impact, $20 spent on one thing might be closer to $2 or $200 spent on another.
Definitely true. But it’s more that I have an upper bound to sanity check claims about how environmentally damaging AI products are. If I’m spending $20 a month on ChatGPT and $20 a month on Claude, I’m at least pretty confident that my impact on the environment is less than spending $40 a month on gasoline - not necessarily great, but not the hugely destructive thing people sometimes make it out to be.
Isn't the whole concern about CO2 emissions that they are an externality whose harm is not factored into profitability calculations? That said, it is unlikely that the harm they do is *vastly* higher than the profit they provide.
Yeah, I’m not saying that the harm of $20 of gasoline emissions is equal to $20 of harm. But I think it’s plausible that whatever harm that is, it’s an upper bound on the harm that is done by $20 of ChatGPT, because at least some of the money on ChatGPT is spent on things other than burning carbon.
Re #39 and the morality of ozempic vs willpower - doesn't it matter that there are a lot of people who are thin with no application of willpower, and only a few people who are thin due to heavy application of willpower?
It's because of the other side - being fat is still a sin. In the old days it was the sin of Gluttony. Now we've done away with sins, but there are still things we disapprove of, and being fat is one of them.
So (speaking as a Person of Amplitude) you do get the social, media, and medical harrumphing about vice and lack of willpower and laziness and stupidity and all that good stuff, often packaged in "but we just *care* about you and your health" but increasingly with no nice packaging; the obesity epidemic that costs the economy so much and burdens the already over-stretched health service means being fat is Evil and fat people are Evil.
So you need to scold and scorn fat people. No willpower? No self-discipline? Look at these people who are thin, they can manage to control their base appetites, why don't you emulate them, you sinner?
So if you get a thin person who says "No, I'm thin just because", that undercuts the moral element. What? No sacrifice, no self-discipline, no moral superiority? That means that maybe, if you can be thin 'just because', then maybe you can be fat 'just because' (yes, I know - "eat less, exercise more", "calories in, calories out" and all the rest of it).
But maybe for some people it *is* harder, and not because they're lazy and stupid and greedy and Evil. Maybe fatness is not a question of moral superiority after all?
No, this cannot be, because that would leave fat people off the hook! They'd take it as an excuse! So naturally thin people *must* have a story of only eating one leaf of lettuce and a glass of water for lunch, then running for twenty miles to burn off the calories!
> So if you get a thin person who says "No, I'm thin just because", that undercuts the moral element. What? No sacrifice, no self-discipline, no moral superiority? That means that maybe, if you can be thin 'just because', then maybe you can be fat 'just because' (yes, I know - "eat less, exercise more", "calories in, calories out" and all the rest of it).
As someone who stays thin with little effort, I'm inclined towards biological explanations of obesity for exactly that reason.
To the extent people comment about it, everyone assumes I must be thin due to "working out" or "dieting", rather than just...genetically predisposed to not put weight on easily. I've learned not to press that point, cause it seems to make people rather uncomfortable. The same with insisting "no, actually I eat whatever and whenever, feel no compulsion to snack, and genuinely dislike sweet stuff". They'll keep poking for a sinful caveat (which is thankfully easy to give, saying you like salty/savoury is a good cover, even if I mean shellfish and they mean potato chips). It just seems like a very alien concept to most people, that there are ordered individuals who don't suffer from pitfalls like cravings or overeating, and mostly didn't have to apply heroic-level efforts to get there. I think that's why GLP-1s bother such folks: it feels like cheating, cause Everybody Knows you can't be thin in current_year without major sacrifice, divine intervention, lots of struggle, etc. Systemic forces and an obeisogenic environment, defeated by an artificial pill from Big Pharma? That's the master's tools, man...to misquote Adam Serwer, the suffering is the point, when it comes to weight loss.
(I also think people mistake good habits for effortful willpower...Father taught me to brew my own cuppa at home when I was like 10, so that's how I've always done coffee, and it's perfectly satisfactory decades later. Hundreds of dollars and thousands of calories avoided yearly by not getting premium mediocre swill at Starbucks on the regular. Generalize and solve for the equilibrium - it's way easier to not dig a hole in the first place than to climb out of a pit later. Avoid temptation, don't fight it.)
>I also think people mistake good habits for effortful willpower.
Well, acquiring good habits can certainly take plenty of willpower, particularly if your circumstances make that difficult/inconvenient. Maintaining them once acquired is much easier, and I agree that healthy lifestyle propaganda should focus on that more.
While genetics do play some role I think habits are more important than most are aware. My parents, grandparents, siblings, aunts, uncles, and cousins all grew a spare tire around age 20-30. From my genetics, one might expect I would do so as well. But luckily at 18 I went to a fancy college where by comparison food was cheap, and picked up good habits (better to waste food than to waste health). Had I first gotten fat and then tried to change habits, I think my body would have shifted into a different metabolism and it wouldn't have worked.
I haven't heard that particular phrasing before, but rather like it. Personal mantra has been stuck on "food waste is a cardinal sin" for a long time now, so any antimemes that help unwind such negative overreactions are...helpful. Kind of at a Copenhagen interpretation of eating compromise: I can't be blamed for wasting food if I never personally touch it, so it's better not to try new stuff if there's a chance I won't like it. Not the best tradeoff for trying to move out of autist gastric comfort zones, e.g. eat plain dry Cheerios for dinner cause they can't hurt me. (Although possibly adaptive for the modern food environment, there's...a lot of incredibly weird artificed shit out there being passed off as "food" these days. Pizza flavoured potato chips? That's a no for me, dogg...)
I would love if we had the IQ pill. Even in a world where I don't benefit from the IQ pill, such that people who have a naturally higher IQ than mine still exist, but there's a new 'IQ floor' at my IQ I'd be happy with that. I feel like I get enjoyment out of intellectual sharing with peers, and disappointment when a conversation is one-sided.
Yeah, plus wouldn't it be better if the people who do spend a ton of time, effort, and "willpower" on not being fat could have those resources freed up and directed somewhere else?
I interpreted that as 'I am Spartacus' - the evidence that Dittmann is a real guy and the owner of the account is pretty convincing, but Elon sincerely believes that doxxing is bad, so he banned everyone involved, suppressed the link, and lied about the answer so you wouldn't feel tempted to click it (since the headline doesn't directly say).
I couldn't do it in my head so I had to "cheat" and write down Bayes' theorem. But here it is:
Let's say that World 1 (W1) is the possible state of the world in which you already have the special goat, and you want to know the conditional probability that you're in W1 given the reveal of the ordinary goat (R).
p(W1 | R) = p(R | W1) * p(W1) / p(R)
p(W1) = prior probability of W1 with no additional knowledge = 1/3
p(R) = prior probability that the host will reveal the ordinary goat rather than the special goat = 1/2
p(R | W1) = probability the host will reveal the ordinary goat conditional on being in W1 = 1 (he can't reveal the special goat in W1 because you have it)
Then just plug everything in:
p(W1 | R) = 1 * (1/3) / (1/2) = 2/3 chance you already have the special goat, and should stick with it.
The intuition behind this is that the host is more likely to reveal an ordinary goat in the world where you're already holding the special goat than in the world where you're not, so the reveal is a useful piece of information.
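If anyone doubts the algebra, here's a quick Monte Carlo sanity check (a Python sketch; the host behaves as in the setup, opening at random when both unpicked doors hide goats):

```python
import random

# Monte Carlo check of the special-goat variant. The host opens one of
# the two doors you didn't pick, never the car, choosing at random when
# both of those doors hide goats.
def trial(rng):
    doors = ["car", "NG", "SG"]        # NG = ordinary goat, SG = special goat
    rng.shuffle(doors)                 # shuffling lets us always pick door 0
    goat_doors = [i for i in (1, 2) if doors[i] != "car"]
    revealed = doors[rng.choice(goat_doors)]
    return revealed, doors[0]          # (what the host shows, what you hold)

rng = random.Random(0)
stay_wins = total = 0
for _ in range(200_000):
    revealed, held = trial(rng)
    if revealed == "NG":               # condition on seeing the ordinary goat
        total += 1
        stay_wins += held == "SG"      # staying wins if you already hold SG
print(stay_wins / total)               # comes out around 2/3
```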
If one remembers the solution for the original Monty Hall problem, it becomes easy.
In the original problem, if you switch, you have a 2/3 chance of getting the prize.
In the special goat problem, nothing regarding the regular prize has changed, so it still has a probability of 2/3 of being behind the door you did not pick. So to get the goat you want to reverse your strategy.
The way I find the most intuitive to get to the solution of the (really fun!) Monty Hall variation knowing the original is basically:
In the original Monty Hall problem, I think there is a 2/3 chance I have a door with a goat, and therefore 1/3 chance the other closed door has a goat.
That is still true in the new variation! However, now since I can see the normal goat, if I have a door with a goat, it must be the special goat. So there's a 2/3 chance I have the special goat.
But the "this is still true" has a big "feels intuitively right to me but [citation needed]".
----------------------------------------------
I think the rigorous way to describe it is more like:
I have a 1/3 chance to pick Car (C), Normal Goat (NG), or Special Goat (SG) at the beginning. Then, Monty will reveal a goat. This means we have the following scenarios with the following probabilities:
(1/6) Me: C | Monty: NG
(1/6) Me: C | Monty: SG *
(1/3) Me: NG | Monty: SG *
(1/3) Me: SG | Monty: NG
But the starred options are impossible because I can see Monty did not reveal the special goat. So staying gives me a (1/3) / (1/3 + 1/6) = 2/3 chance of getting the special goat.
I still feel like my quote-unquote "rigorous" solution is missing something here because despite being fairly confident it's correct, it feels hand-wavy to say "we know the starred scenarios are impossible so we can just vanish their probability mass into thin air and sum the rest of them". I think it'd be more convincing with a visualization.
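One way to see that the vanishing mass is legitimate: redo the table with exact fractions, where the renormalization is just division by the surviving total (a Python sketch using the branch weights above):

```python
from fractions import Fraction as F

# The branch table above, with exact weights. Conditioning = drop the
# branches where Monty shows the special goat, then renormalize the rest;
# the "vanished" mass is exactly what dividing by the total accounts for.
branches = {
    ("C",  "NG"): F(1, 6),
    ("C",  "SG"): F(1, 6),
    ("NG", "SG"): F(1, 3),
    ("SG", "NG"): F(1, 3),
}
kept = {k: w for k, w in branches.items() if k[1] == "NG"}
total = sum(kept.values())             # 1/2 of the original mass survives
posterior = {k: w / total for k, w in kept.items()}
print(posterior[("SG", "NG")])         # 2/3: staying gets the special goat
```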
My answer to the Monty Hall twist, before looking at other solutions:
Let's walk through all the scenarios:
1. You select the Car door. Monty randomly shows one of the goats. The problem specifies that, by chance, we know we are in the branch where Monty shows the bad-goat, so the Good-Goat is behind the other door. Here we get the Good-Goat if we SWITCH.
2. You select the Good-Goat door. Monty is forced to show the bad-goat. Here we get the Good-Goat if we STAY.
In the event we select the bad-goat door, it's not possible for Monty to then show us the bad-goat, so this possibility is eliminated by the setup of the problem. So the two possibilities listed above are the only options, they are equally likely, and we don't know which we're in. So we should not prefer to stay or switch.
Yeah, the issue is that #1 is only half as likely as #2 because half the time in #1 Monty will show the Good-Goat.
A similar puzzle that I find helps with the intuition:
Three prisoners named A, B and C are locked in the dungeon. One of the three will be pardoned by the king, the others will be executed. The jailor knows which one is to be pardoned, but hasn't told them yet.
As the jailor walks by A's cell, A stops him, and asks him for a favor. "Look," A says, "at least one of B and C is getting executed tomorrow. Maybe both, maybe just one, but definitely at least one, since there's only one pardon for the three of us. So pick one of them that's getting executed, and tell me that they are being executed."
The jailor shrugs. "Sure," he says. "C is getting executed tomorrow. Not telling you about B, or about yourself."
"Ha!" says A. "You've fallen into my trap! My chances of survival used to be 1 in 3, because only one of the three of us was getting pardoned. But now I know C is getting executed tomorrow, so one of me and B is getting pardoned. So now my odds of survival are 1 in 2!"
I have my Twitter set to Japanese so that the #TrendingStories on the sidebar show up in Japanese (which I mostly can't read) and don't capture my attention.
Ohhhh, now I feel bad for misidentifying the language (which I also can't read). I feel like I'd be too distracted by the symbols I can't read for that to work for me but my brain already does a good job of filtering out the sidebar.
An easy way to distinguish them is the hiragana: they look very different from the kanji, and should stand out at a glance. If they're there, it's Japanese. If it's all kanji, it's Chinese.
…I'd say that Korean is easy to distinguish for the exact opposite reason, it has lots of perfectly right angles (at least on the computer). I guess the circles are also part of it, but… I dunno, Hangul as a whole just looks very "blocky" to me, including the circles.
Well, I'd say that Hangul is easy to distinguish because it looks absolutely nothing like Chinese characters. That's true, but I don't think it's going to help anyone who isn't already aware of the fact.
You could use an adblock extension, as I would've assumed you do already. Checking it right now, the uBlock Origin element picker doesn't seem to have any problems picking out the trending-stories widget and blocking it.
Is this the perfect combo: knowing enough to manage the menus, but not enough to be distracted? Optimizing only against distraction, I'd go with a language that makes no sense to me at all.
You can change the location used for the trending content independently from the site-wide language, by clicking "show more" on the trending content, then opening the settings menu and setting the location there. My X is set to English, but nevertheless my trending stories are in Japanese because I have set my location to Tokyo.
Do you use ublock origin? If so, note that it can block more than just ads -- the "block element" feature (in the context menu) can nuke most things (though some especially offensive sites will try to make element filtering hard by randomizing the element ID or class name -- I don't know if twitter is one of them since I don't use it myself). Whether that's a noticeable improvement on unreadable text I don't know. I suppose it depends on whether switching twitter's language also alters elements you'd prefer to be in English.
I don't understand how the variation is meaningfully different from standard Monty Hall. It plays exactly the same, you pick a door, host reveals a goat, if you switch at this point you have 2/3 chance of getting the car, so don't switch since you don't want the car.
That was my thought. The standard solution is to switch; if switching was also the solution here, that would mean it simultaneously increased the chances of getting the car and getting the goat. Which doesn't make any sense.
The numbers happen to work out the same, but they're conceptually distinct. Considering a variant of the problem with four doors might help clarify the difference.
Standard Monty Hall, scaled to 100,000 doors: you pick one door, Monty reveals 99,998 goats, the remaining door has a car with probability 99,999/100,000 and your door has a car with probability 1/100,000. You want the car, so switch.
Also-standard Monty Hall: you pick one door, Monty reveals 99,998 uninteresting goats, the remaining door has a car with probability 99,999/100,000 and your door has one with probability 1/100,000. You want a special goat, which isn't the car, so stay.
Going into the "new" problem more formally, 99,998/100,000 of the original probability mass is ruled out by observing 99,998 boring goats. So there are two equally likely options: we picked the special goat, or we picked the car, which I'll normalize up to 1/2 at this point.
If we picked the special goat, the conditional probability of observing 99,998 boring goats is 1.
If we picked the car, the same conditional probability is 1/99,999. So the total space we're working with has a non-normalized size of 1/2 we-have-the-special-goat and 1/199,998 we-have-the-car. After normalization, 99,999/199,998 and 1/199,998 give us actual probabilities of 99,999/100,000 and 1/100,000, which is what we already knew we'd get from the standard Monty Hall solution.
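The same conditioning works for any number of doors; here's an exact-fraction sketch in Python (1 car, 1 special goat, n-2 boring goats, Monty opening n-2 boring-goat doors):

```python
from fractions import Fraction as F

# Exact posterior for n doors: 1 car, 1 special goat, n-2 boring goats.
# Monty opens n-2 doors, never the car, and we observe only boring goats.
def p_hold_special(n):
    hold_sg  = F(1, n)                 # he must then show all boring goats
    hold_car = F(1, n) * F(1, n - 1)   # chance his random picks spare SG
    # holding a boring goat is incompatible with what we saw: weight 0
    return hold_sg / (hold_sg + hold_car)

print(p_hold_special(3))          # 2/3, the original puzzle
print(p_hold_special(100_000))    # 99999/100000, as computed above
```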
If you scroll a little further down this thread, you might see that I did.
There are four doors, and three goats (and one car). You pick a door, and Monty opens ONE of the other non-car doors. You then decide whether to stick with the door you initially chose, or to switch to one of the closed doors.
I agree it's simply the same. It seems suspiciously easy, like one must have missed something, if one thinks of it only in terms of one's own goal and never gets to the point of thinking about Monty Hall's intentions. What seems to make it difficult for those who get to that point is an intuition that Monty Hall is your opponent and wants to deny you what you want; see the first comment above (by Hilarius Bookbinder, and Shankar Sivarajan's reply).
Sorry that was unclear, I referenced your comment because I thought it was an appropriate and succinct reply to Hilarius Bookbinder.
Even though apparently we're not on the same page: you're saying in other comments that the variant Monty Hall is conceptually distinct, whereas I agree with Vim that it's the same, except that the contestant now wants to "lose", and therefore strangely easy (for those who know the original Monty Hall; some of the confusion in the comments may be, not for the reason I pointed out in my first comment, but simply because people don't understand the original Monty Hall).
The thing is that, for the problem to work at all, Monty has to not reveal your goat: If he does, it's just the classic problem all over again (you want the car over the normal goat). That ends up making it similar to the original problem, but in reverse.
The probabilities _are_ the same, and yeah it relies on the host opening the goat. The problem before that is different, but once we're asking what happens when it opens to the "bad" goat it collapses to the same as the original.
I'll hopefully learn the lesson not to be so confidently wrong in the future!
Also, the car would still be great, so there is no losing if you switch doors. If you switch you have a 2/3 chance of getting the car, a 1/3 chance of the special goat, and a 0 percent chance of the bad goat (the only losing option).
Here's a variant which might make the difference clear:
There are four doors, and three goats (and one car). You pick a door, and Monty opens ONE of the other non-car doors. You then decide whether to stick with the door you initially chose, or to switch to one of the closed doors.
Now you can compare the "classic" version where you want the car or this version where you want a special goat.
Honestly I think this is a semantics argument. The original Monty Hall problem and this variation "as written" collapse to the same solution (inverted).
But the general problem of wanting something different from the presenter, who always opens a non-car door, would have different probabilities given different numbers of doors.
> Related: Some Musk supporters in the comments suggest that maybe he hires the Chinese guy to level up his account, but his accomplishments (eg speedruns) are still his own?
It's more like somebody claiming to be an expert chef, and struggling to make scrambled eggs when streaming it. Really weird thing to lie about: https://www.youtube.com/watch?v=FmEe3eUPWq4
Regarding #17, the Musk video game thing: it is clear to anyone who has any experience with PoE that Musk has literally NO knowledge of or skill in that game. This is massively overdetermined and unarguable, and it should suggest that none of his gaming accomplishments are legit.
I'm kind of assuming this was a ploy to garner clout with the tech bro/nerd demographic (his original base, which contributed to his success by conferring status on his employees, allowing him to retain more high-end talent). If he had just got on stream and dicked around with a noob character, that would have been way more endearing and authentic.
On the topic of his steroid use: he also appears to have "HGH gut", i.e. his notorious distended abdomen. This is speculative, but likely all things considered.
More relevant context: a speedrun in POE (called a race) *starts* with a level 0 character and no resources. So leveling is part of the event. Getting someone else to level for you would still be cheating.
I think going on and just being a noob would have been much better PR.
Well, it's plausible that he's decently competent in Diablo 4, even if he hired someone to grind there as well. D4 is much simpler though, and doesn't carry nearly as much "gamer cred" as PoE, so he "branched out". But yes, this debacle made me question my perception of him in general. If he's willing to lie this blatantly about such an insignificant thing (seriously, who cares about some "hardcore" vidya leaderboard?), what else is he lying about?
The most consequential ones are persistent lies about the capabilities of Tesla. These include claims that Tesla full self driving was already safer than a human... claims made about a decade ago, with FSD announced as being just a year away almost every year for the last decade (including 2024). Claims that Tesla will operate a fleet of robo-taxis, or that Tesla owners can make passive income by operating their cars as robot taxis, fold into this. We may also remark on the 2nd gen Tesla Roadster, for which he took likely $250+ million in preorders starting in 2017 and which has yet to see release, supposedly because he's sticking rocket boosters onto it.
Last October, the Tesla We Robot event worked very hard to look as though the robots deployed were autonomous and speaking directly to guests, when they were actually being teleoperated and spoken through by Tesla employees. Both the present capabilities and expected development time of the Optimus are constantly stated to be far in excess of anything observed. I wrote about this event a while ago, and Rodney Brooks has commented on both: https://rodneybrooks.com/predictions-scorecard-2025-january-01/
Given that Tesla is universally acknowledged to be massively overvalued relative to their actual production or returns to investors (dividends to date: $0), its valuation is explained by its promises of enormous future profit from the technology it has developed, not from its actual production of cars or anything else. This is an impression upheld by constant dishonesty to the tune of hundreds of billions of dollars. At first it was going to be a revolution in battery technology, then it was self-driving, now it's household robots, there's always something new on the horizon to take attention off the last promise.
Then there's claims that SpaceX will take humans to Mars in 2029 and that there will be a million-person city on Mars by 2050 (note that in 2011, humans landing on Mars was to be expected in 2021). And then there was the Hyperloop, hype for which has finally died down after going precisely nowhere.
I'm also under the impression that Musk has lied a good deal about his own biographical details, including his matriculation into Stanford and the idea that he spent a year traveling around Canada making a living by doing odd jobs and lumberjacking. There's lots of claims that he's lied about way, way more stuff, but I haven't looked more closely into those and will leave it at that.
Basically, he lies about a lot of things, and it didn't start recently. A lot of these lies have gone without enough critical attention by a combination of money (lots of people are heavily invested in Tesla and need the stock to keep climbing) and friendly media attention (he's a larger-than-life figure who consistently generates headlines by announcing incredible predictions and plans for technology). It's only lately, now that he's under way more public scrutiny and genuinely does seem to be more unstable than before (cf the recent, unfortunately paywalled, Sam Harris piece), that more people are realizing how often he does this.
> And then there was the Hyperloop, hype for which has finally died down after going precisely nowhere.
IIRC, even Musk himself admitted that the Hyperloop was never a serious proposal and was just a cynical ploy to try to kill CAHSR. Which at least looks a lot better for him than believing the Hyperloop could ever have worked.
It is obvious to anyone playing either D4 or PoE that Musk lacks even the most basic, fundamental familiarity with ARPGs, the kind anyone who spent even a few hours in either game would have. (Not knowing how the loot system works, how items are picked up, etc.)
This level of falsehood should call _all_ his "accomplishments" into question. How likely is it that he decided to adopt this level of falsehood on a whim one day?
But did he get more credit than he deserved, by getting some that others should have had? Possibly. The anecdote suggests that he would have few qualms with it.
(Should we infer that he did not deserve his eleven-figure pay package by Tesla, or that other people – or the company itself – should have had some of it? I genuinely don’t know.)
As Adrian said, Musk did not originally fund Tesla, but he very much was a founder of SpaceX.
More to the point: we have extensive third party documentation of Musk’s heavy involvement in SpaceX.
The man is a chronic fabulist, no doubt, but it’s a mistake to not give him credit for a sizable fraction of the astonishing accomplishments of SpaceX.
Apparently this is correct. I'm not sure where I previously saw that this was not the case. I was under the impression he had only stepped in after the initial demo. At a minimum, then, he has a lot of money and can convince competent people to work for him and build successful things. This is not nothing.
This specific incident still casts heavy doubt:
A rich guy buys the expertise of a game player or players who make a very strong build. He claims to have done all the work. Obvious to everyone he contributes nothing but money for time. What do you credit him with here?
A rich guy buys the expertise of rocket scientists and engineers to make a very strong rocket. We have nothing but anecdotes (many his own) that he contributes more than money for time. What do you credit him with here?
> The fact that Elon is as terrible at POE2 as me is strangely reassuring and makes me feel more kindly towards him 😁
Do you also relate to going on a podcast tour bragging about your gaming accomplishments, and then on the stream in question saying things like, "My only complaint about Path of Exile 2 is that it's too easy"?
As said over and over in all the comments: it *would* have been relatable if Elon musk had simply streamed himself dying over and over to the Act 2 campaign boss. The problem wasn't that he was bad. The problem was that he is insufferably smug about how good he is at something while so blatantly lying about it.
Since we're now living in Topsy-Turvy World, I've burned through a lot of my outrage stocks and now reserve the rest for really important and egregious stuff.
Musk lying or enhancing his game prowess is not one of those things. It's so silly I have to laugh. Yeah, if he's cheating, ban his ass (but GGG has its own problems what with the Tencent buy-out and people being pissed off they went on holidays over the Christmas and didn't address issues with the early access etc.)
In unrelated news, why do the Democrats keep shooting themselves in the foot and making me defend, or at least stand on the side of, guys I generally range in feelings towards from "eh, he's an idiot but who cares?" to "I would be very happy if they were fired into the sun".
I don't like Sam Altman! I disapprove of Sam Altman! I would be very happy if Sam Altman, Big Tech and the rest were investigated! But you're not going to do it by resurrecting McCarthyism, and I am forced through gritted teeth to agree: did you send out the scolding letters to Big Tech donors to the Harris campaign, Lizzie? People have the right to donate to the political party of their choice, and it's not enforceable to have senators doing what looks damn like "you can only donate to *us*, not them".
"Are you now, or were you ever, a donor to the Republican party?" is not a good look, Lizzie, see what I said about McCarthyism and a senator getting ready to set up their own new House Un-American Activities Committee:
"These donations raise questions about corruption and the influence of corporate money on the Trump administration, and Congress and the public deserve answers. Therefore, we ask that you provide responses to the following questions by January 31st, 2025:
1. When and under what circumstances did your company decide to make these contributions to the Trump inaugural fund?
2. What is your rationale for these contributions?
3. Which individuals within the company chose to make these donations?
4. Was the Board informed of these plans, and if so, did they provide affirmative consent to do so? Did your company inform shareholders of plans to make these donations?
5. Did officials with the company have any communications about these donations with members of the Trump Transition team or other associates of President Trump? If so, please list all such communications, including the time of the conversation, the participants, and the nature of any communication."
I await with much goddamn interest the revelation that Lizzie made the same requests of any companies that donated to the Biden inaugural fund. If she has reason to think Altman violated campaign finance guidelines, then go after him in SDNY (that does seem to be the court of choice for such, does it not?).
Otherwise, it's none of her business (she's senator for Massachusetts, Bennet is for Colorado, and Altman is living in California so she's not his representative and he's not her constituent) and she does not get to tell anyone "you are only allowed to donate to people I approve of". She's on the Banking Committee, which I don't think covers election or inaugural fund donations (correct me if I'm wrong) so this is just a piece of busy-body snooping which has no legal force. Any lawyers/political experts out there tell me more and if she can indeed compel him to tell her anything other than "none of your business, talk to my attorney".
I don't know PoE, but I can confirm that the "double shield wielder" Elden Ring build he showcased on 23 May 2022 (fat-roll mage with none of the typical stuff you'd see to compensate for heavy armor, estus flask not bound to quick use) has never been good in any version of the game.
At this point it's happened multiple times so I don't think we can say "oh it's just him being naïve about what his smurfs are grinding for him," he is obviously making some point we're not getting. You don't walk back to the same sort of petard that just hoisted you two years ago.
I played PoE 1 for about 9 years, I finished all the challenges in a bunch of leagues, I've been playing PoE 2 since the launch in December. Raj is 100% correct here and it's obvious to me from the video that Musk has no idea what he's doing in it.
It's plausible he picked up how to play D4 well enough to buy an account and actually use it, but the PoE video betrays misunderstandings of fundamental mechanics that anyone who has finished the campaign would understand decently. It's impossible to think he reached the very endgame of Path 2 (in Hardcore, no less!) while thinking the item level of his weapons was the important part, for instance.
Happy to answer questions about this (or the game in general), it's something I know a lot about. Not sure if Musk has just gone nuts wanting everyone to think he's the best at everything, or if he really has been like this all along.
What is your opinion of PoE2 in general? The comments I'm seeing are veering between "this is terrible" and "what are all you scrubs whining about, git gud, I breezed through it" which is not terribly helpful.
The mechanics are different from PoE1 and I had to unlearn a lot of habits I picked up there. So far I hate it and I love it - when you do kill the act boss it really does feel like an achievement but my God you have to grind to get there.
Note: I am not a gamer of any description or by any means. I never spent my childhood/teens playing games. I tootled around a bit with Torchlight before trying PoE because everyone on my dash was talking about it. I have no idea of the mechanics - when the discussion starts with "look for an item with this suffix, then you can get 4% extra DPS by rolling an exalted on top of the base damage but not if it crits" my eyes glaze over. (God bless you, Pohx Kappa, for build guides; Righteous Fire was an absolute *revelation* to me: "Whee! I can just run through mobs and kill them without lifting a finger! I can just stand here and let them fling themselves at me and they die!") I don't engage with the trade mechanic so I have to pick my gear up off the ground like a savage (that was extremely funny to me, Elon manually picking up and transferring to the inventory) or hope some vendor in some town finally this time has the piece I need and that I have the currency to buy it.
But just maybe he *has* been playing like that all along, it's a habit he picked up when he started and he never changed? Yeah, he's most likely lying and cheating, but it's not absolutely impossible that he just plays badly, but does genuinely play (not advanced to where he claims to be, of course; that probably is someone farming for him). Because I'm not A Gamer, it's not that important to me, it just makes me laugh. But if you take games seriously, I do understand why this is A Mortal Sin worthy of burning at the stake.
What a surprise that most of the comments to date are about the MH variation.
However, I don't think it's an interesting variation at all - it just collapses into the original problem!
In the original problem, you should switch, because that gives a 2/3 chance of getting the car.
Here, again, if you switch (to, wlog, slot C), you have a 2/3 chance of getting the car, and if you don't switch, you have a 1/3 chance of getting the car. (None of the modifications to the setup change that, MH is still definitely-not-revealing the car).
Since the other remaining option is *good goat*, if you switch, you have a 1/3 chance of getting good goat, and if you don't, 2/3.
So now you shouldn't switch, but there's nothing interesting here beyond the original problem.
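If anyone wants to sanity-check this numerically, here's a quick Monte Carlo sketch (the door labels and host behavior are my assumptions: he never opens your door or the car's door, and the goats are fungible to him):

```python
import random

random.seed(0)

kept = stays_get_goat = switches_get_car = 0
for _ in range(200_000):
    doors = ["car", "good_goat", "normal_goat"]
    random.shuffle(doors)
    pick = random.randrange(3)
    # Host opens a door you didn't pick that doesn't hide the car; if both
    # unpicked doors hide goats, he opens one of them at random.
    opened = random.choice([d for d in range(3) if d != pick and doors[d] != "car"])
    if doors[opened] != "normal_goat":
        continue  # condition on the scenario as stated: normal goat revealed
    kept += 1
    switch_to = next(d for d in range(3) if d not in (pick, opened))
    stays_get_goat += doors[pick] == "good_goat"
    switches_get_car += doors[switch_to] == "car"

print(f"P(good goat | stay)  = {stays_get_goat / kept:.3f}")   # ~ 2/3
print(f"P(car | switch)      = {switches_get_car / kept:.3f}")  # ~ 2/3
```

Staying gets the good goat about 2/3 of the time, exactly mirroring the original problem's "switch gets the car 2/3 of the time".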
I agree - there are two non-prizes and one prize. What's different is that you've changed the formulation from two indistinguishable non-prizes to distinguishable non-prizes.
Yep, the fact that people find a trivial variation interesting even here is the most interesting aspect of this situation. Just goes to show how unintuitive probabilities are for humans.
I agree, except everyone's forgetting Murphy's Law here. Were I in either Monty Haul problem, I would somehow always end up with the least desirable prize.
If you have a 50% chance of winning, then you have a 75% chance of losing.
Ozy Brennan's linkpost had one which I thought might be interesting to the crowd here. Maybe claims of declining testosterone levels are just an artifact of a change in how we measured it: https://eryney.substack.com/p/maybe-its-just-your-testosterone
I didn't work very much in BIG big buildings that weren't government or industrial so grain of salt, and I'm also just kinda restating the dude with the expensive degrees' case from the shovel swinging level.
A good chunk to most of the expense of a modern skyscraper happens, first, before the first structural pillar reaches above grade, and second, when you have to do all that systems bullshit: wiring and plumbing and HVAC and networking and so on.
Those steps are absolutely mandatory, and people have kinda homed in on the cheapest way to build buildings that reliably don't fall down, which involves basically building a big cage of columns and struts and beams and ties, then building the rest of the building around it.
So the only place left to cut costs is in the category of "things that are nice for people inside the building" and "things that are nice for people outside the building".
Those costs can go as high as you want them to go. Given that I've never seen a big building get more than a lick of paint and some precast concrete panels as decoration, but I have seen some quite nice interiors, any fat left in the budget is going into nice lighting and fast elevators, not beauty for the enjoyment and edification of the hoi polloi.
First, comparing private fund IRR to public index IRR is dumb and that's why no one in finance actually does that. They'd calculate e.g. a market-adjusted version of IRR called direct alpha instead which keeps cash flow comparisons in order.
Second, a16z have a bunch of dedicated crypto funds and that's probably where their crypto performance is concentrated, rather than in their flagship funds.
Third, their flagship funds constitute less than 10% of their cumulative fund sizes over that period, so using these as indicative of a16z's overall performance is misleading.
Fourth, venture capital in the U.S. has underperformed the market in general in the last ~25 years so this shouldn't be huge news.
IRR has some flaws as a tool for comparing returns:
Multiple IRR values:
When a project has cash flows that change sign (from negative to positive and back again), it can result in multiple IRR values, making interpretation difficult.
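A toy example (the cash flows are mine, purely illustrative) of how a series that flips sign twice produces two equally valid IRRs:

```python
import numpy as np

# Invest 100, receive 230, then owe 132 (say, a cleanup cost): two sign changes.
cf = np.array([-100.0, 230.0, -132.0])

# IRR solves NPV(r) = sum_t cf[t] / (1+r)**t = 0.  Substituting x = 1/(1+r)
# turns this into a polynomial in x; np.roots wants the highest degree first.
x_roots = np.roots(cf[::-1])

# Keep real, positive roots of x and convert back to rates r = 1/x - 1.
irrs = sorted(1.0 / x.real - 1.0 for x in x_roots if np.isreal(x) and x.real > 0)
print(irrs)  # two "correct" answers: roughly 10% and 20%
```

Both 10% and 20% set the NPV to exactly zero, so "the" IRR of this project is genuinely ambiguous.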
I touched on that in my comment, but basically they still invest in it because of a myriad of reasons:
- VC offers diversification benefits, since most wealthy people have a huge portion of their capital tied up in "the market" already and are seeking a broader, more comprehensive allocation (which could be for example 10% of their investible funds in VC, 10% in real estate, 20% in bonds, 20% at a large hedge fund, 40% in the S&P)
- it is more tech-weighted than the market and some investors care more about sector allocation than broad market returns
- it offers a chance at much higher returns if luck + successful investing go hand in hand (a16z isn't the only VC fund out there, and to an individual investor it's a fallacy to say "VC offers lower returns than the market" because while you can invest in the whole market, you can't invest in VC as a whole - you have to pick a fund, a fund manager etc., which creates the possibility that your VC fund manager will outperform the market or any other benchmark, relevant or not)
  - this point also works to explain to people who believe hedge funds are trash because they have lower returns than the market on average: 1. you're never investing in the average hedge fund, and that's becoming truer every year as consolidation towards the large pod shops continues, 2. hedge fund strategies typically pursue higher risk-adjusted returns (their Sharpe ratio and other related ratios), so when you invest in a hedge fund you're not interested in getting a 7% average return for 16% average volatility, but maybe a 5% return for 4% volatility, 3. there's a wide range of hedge fund strategies with different risk/return characteristics, i.e. you're not expecting the same performance out of a long-short equity fund and a short-volatility fund. The same applies to VC: you have generalist VC funds that invest in a wide range of tech sub-sectors, and you have specialist ones that are more focused on fintech, crypto, payments, AI, datacenter infra, etc.
- it offers a seat at the table for large investors who want to grow their corporate access/network (this is one of many non-return incentives)
- it's part of the investment mandate of a large asset manager or alternative investments company
- it's a bet on the future growth of VC-related fields (like AI, fintech and other subcategories that have definitely grown faster than the market)
- personal preferences regarding the exciting nature of investing in a high swing, low win rate type strategy (i.e. if you're the guy who invested in the fund that financed Facebook, that gives you bragging rights and potentially a future career out of it, if you're the guy who lost his money or under-performed the market, you don't have to advertise your failure and you probably had most of your money in the market/at a large wealth management firm anyway)
Can you explain why you would care more about sector allocation than broad market returns?
When you say there is an element of potentially lucking into much higher returns, would that look like a particular fund (say, the 2011 A16Z fund) being massively positive one year thanks to a single extremely successful startup? If not, what would it look like to a person who bought a specific VC fund?
In what sense are you never investing in the average hedge fund? I understand you have to invest in some specific hedge fund, but shouldn't the average person (who doesn't have some special access letting them invest in the best hedge fund) assume that whatever hedge fund they invest in will have average performance?
Can you explain what it means to offer a seat at the table for large investors who want to grow corporate access? How does this eventually result in them getting good things?
What does it mean to say both that VC-related fields have grown faster than the market, at the same time that VCs have underperformed the market?
1. You could care more about sector allocation than broad market returns if you have a bias towards a sector ("I believe tech has a brighter future than a bunch of tech + a bunch of consumer cyclical + consumer defensive + financials + healthcare + energy + real estate + utilities + communications + industrials put together"), if you have an informed opinion and/or material non-public information that supports investing in one sector over the broader market, if you have ESG/green energy/anti-military-industrial-complex views and you want to make sure your investment is not solely focused on return but also on social impact, if the investment research from your equity/fixed income/quant analysts identifies more profitable opportunities in a sector relative to others, if you're worried about the impact of certain presidential executive orders on certain industries relative to others, etc. Basically anything from "I know nothing but I have a bias" to "I am better informed than almost anyone else in the world on this topic" can justify caring more about sector allocation than broad market returns.
2. It could be from a single extremely successful startup, or it could be a range of a few successful startups that went on to be valued higher in subsequent funding rounds, increasing the unrealized gains and portfolio value of the fund. Most VC funds have an 8-10 year investing horizon, so LPs commit for a very long time by typical investment standards. One element of lucking into much higher returns comes from the many-swings, few-hits nature of VC investments. Simplification ahead: Imagine you have $100m to deploy, and your mandate says you need at least 20 investments for diversification purposes. Let's say 5% of the investable companies in the VC landscape have a >10x potential, and 95% will go to zero or won't be worth the trouble. On your 20 investments, that'd be 1 investment that is expected to >10x. The luck factor is in the spread of ">10x". Say the distribution goes like this: 1% of companies will turn into 20x, and 0.05% will turn into 100x. If within your 20 investments the one that goes >10x goes 10x, that's great, but the eventual failure of the remaining 19 probably won't make you a star manager. If your >10x company goes >100x, how much of that delta came from your genius analysis of the company and how much of it came from pure luck? Since the entire business model of VC is predicated on getting a few calls very right, the luck factor can expand your success further than your raw analytical skill can. This model is a gross simplification of the process; many funds don't just invest but also use their own network of experts and portfolio companies to create synergies and help their companies expand and reach the next funding stage.
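To put numbers on that simplification, here's a quick simulation (the per-deal outcome probabilities are my own reconciliation of the figures above so they sum to 100%, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-investment outcome multiples and probabilities: 95% zeros,
# 4% ten-baggers, 0.95% 20x, 0.05% 100x (illustrative, sums to 1).
multiples = np.array([0.0, 10.0, 20.0, 100.0])
probs     = np.array([0.95, 0.04, 0.0095, 0.0005])

n_funds, n_deals = 100_000, 20
outcomes = rng.choice(multiples, size=(n_funds, n_deals), p=probs)
fund_multiple = outcomes.mean(axis=1)  # $100m spread equally over 20 deals

print(f"mean fund multiple      : {fund_multiple.mean():.2f}x")
print(f"share of funds below 1x : {(fund_multiple < 1).mean():.0%}")
print(f"luckiest fund           : {fund_multiple.max():.1f}x")
```

Every simulated manager runs the same strategy with the same skill, yet roughly two thirds of funds lose money while a handful return several multiples of capital: the spread is pure luck in which tail outcome you drew.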
Another element of the "luck" aspect is that certain periods in time offer much better investment opportunities than others. The first graph on this page shows the average VC fund return by launch year: https://emaggiori.com/venture-capital-returns/ You can see that VC funds that started in the 1990s had the best opportunity set for investments, and then the next best moment to start a VC fund was around 2007-2011, when tech started its 15-year strong bull run. You could be very talented, open your fund in 2021, and be shit out of luck because VC valuations were sky-high. You could be very talented, open your fund in 2010, and have a statistically much better opportunity set for your investments. In both cases, the talent is very much real, but the luck part is fickle.
3. The average person cannot be a hedge fund investor as they have to be an accredited investor. I guess by the law of averages, the average hedge fund investor does invest in the average hedge fund, so in that sense I am statistically wrong. What I wanted to point out is that even if you don't have access to the very best hedge funds, if you're an accredited investor you can still contact many hedge fund managers and get your money in the door. In that sense, even the slightest amount of research done on your own time (for example looking at average risk/return profiles for different strategies, and then looking at the track record of the fund managers you're talking to) should turn you from "a chill guy who invests kinda randomly into the US public equity markets" (someone who buys a broad market index) to "a guy who invests in a specific HF strategy with specific risk/return details, with a specific very real fund manager guy with a real track record", which to me involves many more layers of due diligence than investing in the market. Say I invest in a merger arbitrage hedge fund, I should know that this strategy requires leverage, involves left tail risk, and I should probably ask the fund manager if they're more interested in cross-border & vertical mergers or in friendlier, domestic mergers because that also has an impact on risk/return expectations. It's difficult for me to accept the saying that "the average hedge fund investor [...]" because it hides a whole lot more diversity than "the average equity market investor", basically.
4. Individual LPs might want to create a close relationship with a powerful venture capitalist, because some VC people can be helpful in raising capital for their own projects or companies. Corporate venture capital (CVC) involves nurturing niche goods & services ideas that the company doesn't want to devote in-house R&D dollars to. As you can imagine, CVC decision making involves different goals than your typical VC fund. The CEO and board of a company that has a CVC arm might want to dump money into it for competitive reasons rather than focusing on returns.
5. Some VC-related fields have grown faster than the market, but at the same time VCs on the whole have underperformed the market. You can see here https://cepres.com/insights/financial-services-sector-shows-outperformance-in-vc-deals-through-2019 that financial services and biotech VC IRRs were much higher than the market return across the 2003-2019 period they observed. VCs on the whole have underperformed the market because the money-weighted IRR across all sectors has turned out to be lower than the market return.
The only way comparing private IRR to public IRR makes sense is if the cash flow dates and outflow amounts are identical in both scenarios, since the timing is critical for the IRR calculation. Private funds make very irregular cash flows, and so the only way you can make an apples-to-apples comparison is if you do a counterfactual using private fund cash flows with concurrent public index returns (this is essentially what the direct alpha metric does via discounting). The graphic on Twitter says it uses Cambridge Associates IRR, but even CA says "Due to the fundamental differences between the two calculations, direct comparison of IRRs to AACRs is not recommended" (https://www.cambridgeassociates.com/wp-content/uploads/2018/07/WEB-2018-Q1-USPE-Benchmark-Book.pdf).
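For the curious, a rough sketch of the direct alpha idea with made-up annual cash flows and a hypothetical index (this is the discounting intuition, not Cambridge Associates' or Preqin's exact methodology):

```python
import numpy as np

def irr(cf):
    """IRR of annual cash flows: the valid root of NPV(r) = 0."""
    roots = np.roots(np.asarray(cf, float)[::-1])  # roots in x = 1/(1+r)
    return min(1 / x.real - 1 for x in roots if abs(x.imag) < 1e-8 and x.real > 0)

def direct_alpha(cf, index):
    """Sketch of direct alpha: scale each private cash flow to the final
    date using the public index's return, then take the IRR of the scaled
    flows. Zero means the fund's cash flows merely matched the index."""
    cf, index = np.asarray(cf, float), np.asarray(index, float)
    return irr(cf * index[-1] / index)

# Hypothetical fund (calls 100, distributes 60 then 90) vs an index
# compounding at 8%/yr over the same dates:
cf    = [-100, 0, 60, 90]
index = [100, 108, 116.64, 125.97]
print(f"fund IRR     : {irr(cf):.1%}")
print(f"direct alpha : {direct_alpha(cf, index):.1%}")
```

Note that the direct alpha here comes out around 8%, not the naive "17% fund IRR minus 8% index return" of 9%: keeping the cash flow timing in both legs is the whole point.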
To that end, thinking of the IRR of an investment in a public index is weird to begin with. The whole point of IRR is to account for the timing of outflows and inflows, so the concept makes no sense with respect to investing in a public index; it's just an ordinary total rate of return. But again, that makes it an inappropriate comparison. Incidentally, I can't even find Cambridge Associates reporting a net IRR for a public market index anywhere, which highlights how odd this comparison is.
IRR has other problems like the multiple values thing Gordon mentions. It also makes an implicit assumption that all distributions are re-invested at the IRR rate, which amplifies (i.e. exaggerates) the sign of any IRR away from zero. LPs' uncalled capital often is sitting in something like the S&P500 before it's called into private fund, and likewise distributions are often re-invested into something like the S&P. There's a "modified IRR" metric that tries to account for this, but raw IRR certainly does not.
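A small illustration of how that reinvestment assumption inflates raw IRR relative to a modified IRR that re-invests at a market-like rate (cash flows and rates are made up):

```python
import numpy as np

def irr(cf):
    roots = np.roots(np.asarray(cf, float)[::-1])  # roots in x = 1/(1+r)
    return min(1 / x.real - 1 for x in roots if abs(x.imag) < 1e-8 and x.real > 0)

def mirr(cf, finance_rate, reinvest_rate):
    """Modified IRR: positive flows compound forward at reinvest_rate and
    negative flows discount back at finance_rate, instead of IRR's implicit
    assumption that every distribution earns the IRR itself."""
    cf = np.asarray(cf, float)
    n = len(cf) - 1
    fv_pos = sum(c * (1 + reinvest_rate) ** (n - t) for t, c in enumerate(cf) if c > 0)
    pv_neg = -sum(c / (1 + finance_rate) ** t for t, c in enumerate(cf) if c < 0)
    return (fv_pos / pv_neg) ** (1 / n) - 1

# A big early distribution makes raw IRR look spectacular, because IRR
# assumes the 150 is re-invested at the IRR itself for the remaining years:
cf = [-100, 150, 0, 0, 10]
print(f"IRR : {irr(cf):.1%}")               # roughly 53%
print(f"MIRR: {mirr(cf, 0.07, 0.07):.1%}")  # roughly 18% at a 7% re-investment rate
```

Same cash flows, wildly different headline numbers; the gap is entirely the reinvestment assumption.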
The interpretation of that spreadsheet is also weird. 5 of the first 7 a16z funds do have a higher net IRR than their S&P500 IRR. The ones that don't are very young funds -- this person's data only goes to 2018, and the ones with near-zero IRR are still in the downside of the J-curve.
I was also too flippant in my remark about US VC underperforming. If you look at the vintages of the a16z funds shown and calculate pooled direct alpha using the S&P500 as the benchmark, you'll get a pooled direct alpha of ~5% (meaning public returns with the private market cash flows would come out behind). If you go all the way back to 2000 through today, you'll get a direct alpha of about 0.4%, i.e. basically no difference. (When adjusting for risk, however, it is quite possible that the VC alpha would be zero or negative; I don't have the time to crunch through that.) This is using Preqin data, by the way. Unfortunately they don't have cash flows for a16z funds so I can't look at them specifically.
I guess the bigger point I'm making here isn't that the claim is wrong. It just doesn't look like anything I'd expect from someone who knew what they were doing.
> I do worry that even if you officially say “pay on results”, therapy results are naturally fuzzy and hard to assess, and it’s too aggressive to refuse to pay your life coach who’s put dozens of hours of work into your case, so most people will say “yeah, I guess that kind of worked in a sense” and pay the money (this works even better if your clients are “lifelong pushovers”). How would one design a version of this system which avoided this failure mode?
Yeah we attempt to solve this by making the dollar amounts rather large. Generally people aren't going to pay 4–6 figures because of "yeah, I guess that kind of worked". More on that here: chrislakin.blog/p/the-case-for-pay-on-results
Another approach might be to define hard endpoints for success - something that can't be 'faked'. For example, "I always wanted to go skydiving, but anxiety makes this goal seem impossible". Or for someone in crisis, have them keep a journal, where they go from a baseline of wanting to commit suicide every day to going a fortnight without any thoughts of suicide.
Concrete endpoints can be subjective. I understand that goals change, and perhaps it would help to measure your outcomes if you didn't tie it to pay.
Part of my job is to design clinical trials, and often these have subjective endpoints we have to track. Hard, prespecified endpoints are important, but sometimes these endpoints are things like, "lower back pain", which no outside observer can quantify.
If I want to determine whether to use your services, telling me "these people paid for these services, given a policy of 'pay when you're satisfied'" is useful information, but not a direct answer to my question. It's not a bad endpoint, but it's still a surrogate endpoint. If I'm thinking of engaging your services, I don't want to know whether I will ultimately pay for those services within the policy parameters - even permissive parameters such as those. If I'm depressed, I didn't wake up thinking "I hope I get to pay for CL's services"; I woke up thinking "I hope CL's services help me not feel so low."
On the flip side, you're hoping that people's lives change, I'm sure. But you're also hoping their lives change to the point where they're willing to pay you for your services in helping that to happen. So for you the hard endpoint of success is, "felt services were worth paying for". These two outcomes seem like they're probably closely aligned, and I'm not claiming otherwise, but since the overlap isn't perfect there's room for interpretation error.
23. The poll is 1.5 years old for a 5 year prediction and very ambiguously worded. I don’t think it’s at all useful for anything but what people were maybe thinking a year and a half ago.
In case anyone else was wondering, that really fertile province in southeastern Turkey is Sanliurfa. It has both an unusually religious local population and a large population of Syrian refugees.
No Amish, Haredim, or Elon Musk, which were my first three guesses.
Am I crazy, or is the Monty Hall variation exactly the same as the original with a different choice due to different goals?
There’s still the exact same chance the car (the original goal) is behind the unopened, unchosen door: 2/3. The only difference in this scenario is that you don’t want the car.
It's not. It's different because Monty COULD have opened the door with the prize you really want (the special goat), because he thinks you want the car. That he didn't is relevant information to factor in.
I don't think so. The scenario as given is that he showed you the normal goat. The only actual question is "Do you switch?" And the probabilities that inform the decision are the exact same probabilities as in the original: 2/3 chance of the completely untouched door hiding a car. The only reason we make a different decision here than the original is because we don't want the car—we want the thing that has the 1/3 chance of being behind the untouched door (and therefore 2/3 chance of being behind your currently chosen door).
It's "Do you switch, given the information presented?" If you change the probabilities of opening certain doors given the location of each prize, you change the information you get from him opening them.
The scenario is that you're shown the normie goat. He cannot show you the special goat. That would be a different scenario. Therefore the information presented is exactly the same as the original, where the 2/3 chance of the car being behind one of the two doors you didn't choose collapses onto the completely untouched door. The only difference between this and the original is that you want the non-car hidden object, so you don't switch.
Suppose there is a coin where you're 50% sure it's fair and 50% sure it's double sided and always lands heads. You flip it 10 times. It lands heads each time. It couldn't have landed tails, because that would be a different scenario, so you conclude that it's still 50:50.
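In numbers, with the 50/50 prior:

```python
# Posterior that the coin is double-headed after ten heads in a row:
# under the double-headed hypothesis the evidence "couldn't have been
# otherwise", yet it still moves the odds enormously.
prior_fair = prior_double = 0.5
like_fair, like_double = 0.5 ** 10, 1.0

posterior_double = (prior_double * like_double) / (
    prior_double * like_double + prior_fair * like_fair
)
print(round(posterior_double, 4))  # 0.999
```

Ten heads leaves you about 99.9% sure the coin is double-headed, not 50:50, which is the flaw in the "that would be a different scenario" reasoning.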
Yeah, I was completely wrong about that—the fungibility of the goats from the perspective of the host is a) relevant and b) paramount for comparison to the original problem. Everything else is still true.
The numbers happen to work out the same, but they're conceptually distinct. Considering a variant of the problem with four doors might help clarify the difference.
Conceptual distinctiveness isn't relevant, just the probabilities. Probabilities aren't inherent to objects or even object states, just our knowledge of those objects/states, so there's nothing gained by talking about them as if they're non-fungible.
Have you seen the classic joke of simplifying 16/64 by canceling the "6" in both numerator and denominator? That's basically what you're doing here. As I said, if you have four doors instead of three (three goats, and Monty opens one of the non-car doors remaining after you make your choice), your approach would get you the wrong probabilities.
I saw your elaboration on your 4-door version higher up, and the reason it doesn't map is because you still have Monty revealing only 1 door—in order to exemplify the underlying probabilities with these more-door versions, the crux is that you always have to be left with just 2 doors: the one you originally chose, and one completely untouched door.
It's interesting because in the "Monty Fall" version, where Monty doesn't know where the car is and just happens to open the door to the goat by accident, switching doesn't affect your odds.
So you might think that, since Monty doesn't know where the special goat is, him just happening to reveal the normal goat by accident wouldn't affect the odds. But Monty does know where the car is, and avoiding the car improves your odds of getting the goat, so it does matter.
(But yes, the odds end up working out the same - you have a 2/3 chance of getting the goat if you stay and a 1/3 if you switch.)
Yeah. The odds are still the same precisely *because* the same objects (the goats, collectively) are still fungible from the host's perspective, as in the original. And he's still avoiding showing you the car. Basically everything is the same because the host's knowledge and intentions are still the same, and that's really the only way information is injected into the system aside from the reveal, which is also essentially the same.
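A quick simulation of the two host policies (my encoding of the scenarios: "knows" means the host avoids the car but picks randomly among goats, while the full "Monty Fall" host opens any unpicked door, and we condition on him happening to show the normal goat):

```python
import random

random.seed(1)

def trial(host_knows_car):
    doors = ["car", "good_goat", "normal_goat"]
    random.shuffle(doors)
    pick = random.randrange(3)
    others = [d for d in range(3) if d != pick]
    if host_knows_car:
        options = [d for d in others if doors[d] != "car"]  # avoids the car
    else:
        options = others  # full "Monty Fall": may reveal anything
    opened = random.choice(options)
    if doors[opened] != "normal_goat":
        return None  # condition on the normal goat being the one revealed
    stay = doors[pick]
    switch = doors[next(d for d in others if d != opened)]
    return stay, switch

stats = {}
for knows in (True, False):
    results = [r for r in (trial(knows) for _ in range(300_000)) if r]
    stats[knows] = (
        sum(s == "good_goat" for s, _ in results) / len(results),  # stay wins goat
        sum(w == "car" for _, w in results) / len(results),        # switch wins car
    )

print("host avoids car :", stats[True])   # stay ~2/3 goat, switch ~2/3 car
print("pure Monty Fall :", stats[False])  # both ~1/2: his ignorance matters
```

The car-avoiding host gives the stayer the 2/3 edge described above, while the fully ignorant host collapses everything to 50:50 even after the identical reveal.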
33. Therapists could work on commission like real estate agents: define a goal and agree on a price. Possibly pair with prediction markets to i) determine if the goal is met and ii) let the therapist sell action so they don't have to wait until the goal is achieved.
I thought #47, the Jensen Huang story, sounded especially batshit because he (the CEO of Nvidia) is cousins with the CEO of AMD. Was it the same aunt and uncle, and where did they send her?
PSA: But apparently this is a common misconception. They're not related. Even the author of the book Chip Wars gets this wrong.
Re 20: I harbor neither positive nor negative feelings for a16z's founder, although I have had to unfollow him on X because his constant politicized posting was ruining my timeline. However, institutional investors don't look at the S&P 500 as a benchmark for every strategy. The risk and return characteristics of venture capital funds make them incomparable to the S&P 500:
1. The business model of venture capital is to have a few investments outperform by a very large margin, and most investments are expected to have negative IRR. The companies in the S&P 500 aren't expected to mostly go bankrupt in the next 10 years, in fact they're mostly expected to continue growing their bottom line at around 5-7% per year pretty much indefinitely. The fact that recent stock market performance was driven by a small number of tech stocks does not make the market as a whole behave like a venture capital fund.
2. Once a fund company reaches critical mass, which hinges on luck a lot more than most people realize (even more so for venture capital), its return is typically expected to go down as the opportunity set of high-growth companies to invest in gets smaller. You can "easily" deploy $100m into a ton of seemingly high quality startups, but the job gets multiples harder when you raise $1B for your next fund and you have to deploy most of it quickly (this falls into one of the many inefficiencies in capital allocation at large funds). The S&P 500 however doesn't suffer from its own size in the same way, its performance hinging more on the macroeconomics of the markets the companies operate in and the quality of management's execution of corporate strategy (which is to "maximize EPS" at the CEO level, since most incentive structures emphasize those sorts of metrics). Here again, comparing the returns between VC and the largest US companies is a mistake.
3. Another reality check more than a reason to defend a16z performance is that the types of large investors who put money into subsequent funds are likely already heavily invested in the US stock market and may want to get involved in venture capital for reasons that have absolutely nothing to do with "I want this investment to beat the stock market!". Typical reasons include: seeking diversification away from large caps and into small caps, embracing more risk for the hope of higher returns with the acceptance that higher returns aren't guaranteed by higher risk, creating powerful friends and expanding corporate access/network, making a more or less informed bet on certain technologies that VC fund x might be more inclined towards than VC fund y, etc.
4. Similarly to Private Equity funds, VC funds typically own a large amount of stock in private companies. This means the typical VC portfolio can "withstand" a prolonged drawdown in public equity markets by having its businesses valued less frequently (for example, only at the time of an exit at a new valuation). LPs who invest in VC know this, and presumably they go along for the ride willingly. Keep in mind this is an entirely behavioral advantage; even if public equity values get "updated" 5 times a week while your typical VC company is valued maybe once every year or two, the underlying value of each company fluctuates with whatever price new investors are willing to pay selling investors, which can theoretically change at any time. In that sense VC and PE companies aren't much safer investments than public equity.
The better case to make against an investment in a16z funds since 2010 would be to go to each individual LP, ask them for their rationale for investing in the first place, and deconstruct the behavioral biases and co-mingling that led to this point. I can't do that, but I suspect if you do, you may find that most investors were acting rather rationally based on their risk and return objectives, and were actively choosing VC, knowingly giving up the relative safety of the broader stock market. Could you have put all your money into an ETF that tracked the stock market for 0.2% fees per year? Sure! But filthy rich people and cash-rich corporations don't put all their money into one basket, and they don't pursue the same goals with their investments.
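The diversification logic running through the points above (uncorrelated risky bets smoothing overall returns) can be sketched with a toy simulation. Everything here is invented for illustration: the payoffs, probabilities, and 50/50 portfolio are my own assumptions, not real VC or index data.

```python
import random
import statistics

rng = random.Random(1)
years = 10_000

def annual_return(rng):
    # A single risky bet: +40% in a good year, -20% in a bad one,
    # 50/50 odds, so the expected return is +10% either way.
    return 0.40 if rng.random() < 0.5 else -0.20

solo = []     # returns from holding one bet alone
blended = []  # returns from a 50/50 split across two independent bets
for _ in range(years):
    a = annual_return(rng)
    b = annual_return(rng)  # drawn independently, i.e. uncorrelated with a
    solo.append(a)
    blended.append((a + b) / 2)

# Same expected return, but the blended portfolio's swings partially
# cancel: its volatility drops by roughly a factor of sqrt(2).
print(statistics.mean(solo), statistics.stdev(solo))
print(statistics.mean(blended), statistics.stdev(blended))
```

With more uncorrelated bets the volatility keeps shrinking while the expected return stays put, which is the whole argument for adding VC exposure even if it never "beats the S&P" on its own.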
I commend shachaf in #2 for including a necessary detail that is often left out of incautious retellings of the Monty Hall problem: that Monty knows which door has the car. Unfortunately, they still left out another necessary detail, which is that Monty is REQUIRED to open a non-car door and give you a chance to switch. The standard solution to the original problem only goes through if this is stipulated. (If Monty is allowed to see what door you picked before deciding whether or not to open a door and let you switch, then for all you know, Monty might do it ONLY to the people who originally picked the car in order to psych them out, and force everyone else to stay with their incorrect first pick.)
But if we assume this works like the ordinary Monty Hall problem except for the special goat, then keeping your original pick has a 2/3 chance of getting the good goat, and switching has a 1/3 chance, so you should stay.
The easy way to see this is to note that, per the standard problem, switching should give a 2/3 chance of a car, so getting the goat has to be the inverse of that.
If you don't trust the easy way, you can break the problem down into 3 scenarios:
A. Your original pick is the good goat
B. Your original pick is the bad goat
C. Your original pick is the car
Initially, each of these has equal odds (by symmetry).
In scenario A, Monty HAS to reveal the bad goat (it's the only door that isn't the car and isn't your initial pick). So the fact that Monty DID reveal this does not change this scenario; it was inevitable.
In scenario B, Monty has to reveal the good goat. This didn't happen. Therefore we can't be in scenario B; its odds are reduced to zero.
In scenario C, Monty has a 50/50 chance to reveal either the good goat or the bad goat. This means this scenario had a 50% chance of being falsified, if we were in it. Thus the weight on this scenario is halved; i.e. half of the possible worlds in class C "died" when Monty revealed the bad goat, rather than the good goat.
1 chance of scenario A, plus 0 chance of scenario B, plus half chance of scenario C, combines to give us 2:0:1 odds, i.e. 2/3 chance scenario A, 1/3 chance scenario C. (Same answer as the "easy way" above.)
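The three-scenario argument above can be checked with a quick Monte Carlo simulation. This is my own sketch; the door labels and trial count are arbitrary. It conditions on the observed event (Monty revealed the bad goat) and counts how often staying versus switching lands on the good goat.

```python
import random

def trial(rng):
    doors = ["good_goat", "bad_goat", "car"]
    rng.shuffle(doors)
    pick = rng.randrange(3)
    # Monty must open a door that is neither your pick nor the car;
    # if both remaining doors are goats (you picked the car), he
    # chooses between them at random.
    options = [i for i in range(3) if i != pick and doors[i] != "car"]
    opened = rng.choice(options)
    return doors[pick], doors[opened]

rng = random.Random(0)
stay_wins = switch_wins = n = 0
for _ in range(100_000):
    picked, revealed = trial(rng)
    if revealed != "bad_goat":
        continue  # condition on what we observed: Monty showed the bad goat
    n += 1
    if picked == "good_goat":
        stay_wins += 1   # staying keeps the good goat
    else:
        switch_wins += 1  # you picked the car; the other door is the good goat

print(round(stay_wins / n, 2), round(switch_wins / n, 2))  # roughly 0.67 and 0.33
```

The filtered trials split roughly 2:1 in favor of staying, matching the 2:0:1 odds derived above.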
I believe your claim that "ride-sharing is a natural monopoly" is completely wrong. I have been publishing analysis of Uber since 2016 and have never seen any objective analysis demonstrating the type of powerful scale economies that natural monopolies need. Can you produce any evidence of this?
Urban car services existed for a century without any tendencies to high concentration, much less monopoly. Uber never had any Facebook-like Metcalfe's-law network effects where users highly valued the fact that other people used the app. Uber's astronomical growth rate in its first decade produced $32 billion in losses. Just as there is no evidence of major scale economies, there is no objective evidence that Uber is more efficient than traditional taxis.
Uber was a purely predatory company. It used anti-competitive subsidies and its ability to sustain those $32 billion in losses to drive lower cost, more efficient competitors out of business. It only achieved breakeven after 15 years because (post-pandemic) it drove fares much higher and driver compensation much lower than they had been before Uber began operating. Those billions in subsidies—and Uber's demonstrated ruthless behavior—destroyed any possibility that new market entry could discipline Uber's ability to raise fares and impoverish drivers at will. Two journal articles document why Uber's economics meant that it could never operate profitably in competitive markets:
Will the Growth of Uber Increase Economic Welfare? 44 Transp. L.J., 33-105 (2017)
Uber's Path of Destruction, American Affairs, vol.3 no. 2, Summer 2019
"Uber was a purely predatory company. It used anti-competitive subsidies and its ability to sustain those $32 billion in losses to drive lower cost, more efficient competitors out of business."
I don't think this makes sense outside the context of natural monopoly. If you constantly need to burn money to put competitors out of business, and anyone can easily enter at any time, you'll always be burning money. My impression was that the goal was to burn money, put competitors out of business long enough to exploit a natural monopoly, and then raise rates. With the natural monopoly coming from the fact that the average rider wants to go with a big network because it will have the most drivers (and therefore shortest wait), and the average driver wants to go with a big network because it will have the most riders (and therefore most money).
1. You failed to reply to my request for independent analysis showing Uber was a "natural monopoly". You just repeated the assertion that it was. Would be happy to wager a considerable sum that you can't find any.
2. "Natural monopolies" have well understood economics, such as enormous scale economies (e.g. a high % of fixed costs so that the marginal cost of growth is very low). Uber had none of these features.
3. The claim that riders want to go with the network that has the most drivers/shortest wait ignores the actual economics here. Riders wanted the network with the lowest fares, which were the result of $32 billion in subsidies. Uber had more drivers because of those same subsidies. If riders had to pay the actual cost of their rides they would not have chosen Uber. No transportation business has the Metcalfe's-law network effects you are claiming here. People don't fly Southwest or United because lots of other people do; they only care about the price and service offered to them.
4. Likewise drivers want the best compensation and conditions. They don't care about the size of the network. There's no natural correlation between the size of a transport company and the level of wages offered. Uber has driven (the already awful) driver compensation to below minimum wage levels in many places. Uber used anti-competitive power to destroy the normal workings of the driver market.
5. Claiming that "golly, it's not rational to invest in companies that don't have powerful competitive economics" suggests you haven't noticed the powerful forces that not only don't care whether "market competition" is maximizing overall economic welfare, but are fighting fiercely to undermine the few forces protecting "market competition".
Once Uber spends all that money on subsidies, can't a new competitor afterward enter the market and force them to spend more? It seems like Uber should never be able to get profitable enough to pay off those subsidies.
Uber's hyper-aggressive growth in its first decade was not designed to exploit scale economies and achieve lower unit costs than competitors (as Scott incorrectly argued). It was predatory behavior designed to drive more efficient competitors out of business and ruthlessly convince everyone that its domination was inevitable and future competitive (or legal or political or journalistic) challenges would be hopeless. Uber achieved enormous anti-competitive market power because it had $13 billion in investor funding by 2015. This was 2300X more than Amazon's pre-IPO funding. Amazon could fund most early growth out of positive cash flow because it had strong, legitimate efficiencies. Uber had no legitimate efficiencies and lost $32 billion. It only reached breakeven post-pandemic when it dramatically raised prices and cut back service. No new entry of any significance occurred because everyone knew that Uber would ruthlessly retaliate, and everyone knew that no one in government would lift a finger to stop predatory behavior designed to protect its artificial anti-competitive market power.
But Uber has retreated from some markets? And in every city I've ever lived in, there has been at least 1 Uber competitor?
Although, I think your point of "Urban car services existed for a century without any tendencies to high concentration, much less monopoly" is a very good one.
3. Actually, some people do fly with some airlines because they fly more routes and thus offer better connections. This is not the same as flying more passengers but it is strongly correlated to it (as having a large passenger market share helps serve many less-used routes with reasonably large planes.) Other people flying high usage direct routes on flexible tickets prefer airlines that have more departures per day as this makes flexibility much more valuable. This is also correlated to having more passengers.
The big difference between modern ride hail and traditional urban car services is that modern ride hail can connect a rider on one network to any car on that network, whether visible or not, while traditional urban taxi hailing either connected a rider (who didn't have a network) to any visible car (regardless of network) or connected a rider to a car through pre-scheduling a ride. Modern ride hail gets much more value from network effects than traditional urban taxi services did, so it has more claim to natural monopoly. (Though if drivers can all have two phones, one on Uber and one on Lyft, then the network effect disappears.)
You have absolutely no evidence demonstrating that this "network effect" had any huge impact, much less the $100 billion impact that Uber's investors were pursuing.
You have absolutely no evidence showing that any other transport service realized billion-dollar impacts because of their apps.
As your final comment suggests, any impacts (positive or negative) depend much more on whether there is meaningful market competition than on any "technology" issue.
Why did it have more ability to sustain losses than its competitors? It's not like an existing car company that created a rideshare division (the way one created a self-driving car division).
>Those billions in subsidies—and Uber’s demonstrated ruthless behavior—destroyed any possibility that new market entry could discipline Uber’s ability to raise fares and impoverish drivers at will.
Did it? There exist Uber alternatives in medium-to-big cities. As long as drivers are not forced into exclusivity, and margins aren't too tight (in which case, who's being harmed?), a competing service can arrive.
In Poland there's Bolt, Uber, FreeNow, and some old taxi companies which added an app with a map and prices. And many (most?) people have several apps installed and pick the cheapest offer.
And many (most?) drivers belong to more than one network.
I don't see natural monopoly here.
Main barrier to entry is lobbying/satisfying all the regulations.
Re: link 13. A person I was close to for many years went through a long pattern where she would receive some new mental health diagnosis, and rather than her life improving as she started receiving more effective treatment for it, she seemed to incorporate each of them into her identity, lower her own expectations of functionality, and become less and less capable of living a normal life. She went from being largely functional (able to hold down jobs, complete college courses, etc.) with some difficulty and occasional stumbling blocks, to being highly dysfunctional (not only being unable to hold down a job, but being asked not to come back to a volunteer position because she was so unreliable that they found it easier to plan around her never being there), and eventually to the point of rarely ever leaving the house at all. She was continuously in therapy throughout, and claimed that it helped, but from the outside, it never appeared to in any way that I could tell. It appeared to me that every time she received a diagnosis, she would start associating with people online who'd made that diagnosis part of their identities, and take on board all their input about what to expect of herself and how to live her life, and invariably end up worse off for it.
One person who I talked to about this, a friend who I made after I lost touch with the first person, summed up her own experiences on the subject as "Yeah, the mental health community is super toxic."
In the beginning, when she picked up new diagnoses, I experienced a sense of relief; "Thank goodness she'll be able to get some assistance for this condition she's had all along." But eventually, I started to dread the prospect of how she'd respond to any new diagnoses. This certainly won't reflect everyone's experiences with mental health diagnoses, but I don't think she's entirely alone in those experiences either.
I believe Freddie de Boer has said something similar. Unfortunately, I can't quickly find the post where he said people are worse off as a result of having mental illness as an identity they cling to rather than seeing it as a problem to attempt to overcome.
...or rather, that's one of the more-banger ones on the topic, there's a long tail of sundry and similar. I don't think he's done a The Basics summary post on that topic yet. Hopefully in the next book though!
30. Seems like this ignores the elephant in the room: all of southeast Turkey has markedly higher fertility rates. This region, of course, contains provinces with Kurdish majorities. The Kurds may be about a fifth of the Turkish population right now (https://www.cia.gov/the-world-factbook/countries/turkey-turkiye/#people-and-society), but this could very well change in the future—and certainly comes with many domestic and foreign policy implications (see: Syria right now).
It's also the closest part of the Syrian Civil War - I bet if you could dig into stuff like childhood and disease mortality, you'd find them unusually high there compared to the rest of Turkey. I think the biggest driving forces in fertility reduction are major declines in childhood mortality and reliable contraception.
At least that would be in line with Kurds having higher fertility than the rest of the country in Turkey, Syria, and Iran, but lower fertility in Iraq. At least that is what I got from skimming this article. (Didn't read all of it, it's a much-more-than-you-wanted-to-know analysis.)
> 1: Why running for Congress will ruin your life (unless you’re already rich). It costs ~$100K out of pocket before you get campaign funding, and you have to take a ~yearlong break from your career to campaign. If you win, you need to maintain two residences (one in DC, one in your district) on your $175K Congressional salary. Also, you have no power your first term, nobody will let you do anything, and you spend the whole time trying to get re-elected.
It doesn't have to be out of pocket. Self funders are mostly losers. If you can't get a few thousand people to pitch in a two figure amount or a few bigger backers then your chances of winning are low anyway. This guy might know some people who ran but I'll bet none of them got through.
This is really something that I find fascinating because this is a system that's very public and extremely important. But I guess no one really takes the time to look into it? It's weird. While I, as an engaged citizen, have limited influence I at least have more than an unengaged citizen. And it doesn't take that much to be engaged.
I've been thinking about why that is. And I think the reason is that there's little reason to do outreach qua outreach. If I know how to influence politics (and I do to some extent) then spending money and resources explaining that is probably a worse use of time than advocating for what I specifically want. And, like all knowledge, if you don't know something then you can't judge how trustworthy the other person is. Especially because politics is not predictable even for insiders. Also, there's absolutely a dynamic where rich outsiders get promised results in exchange for cash and then it's just, "Darn, well, we only had a small shot anyway."
Anyway, are rationalists/EA ever going to politically organize effectively or is it just going to be scattershot campaigns that peter out as with that guy who ran for Congress?
Organize in such a way they are able to enact their policy preferences and resist things against their preferences. I could get into specifics of how I think they should do it but there's various ways to do it and my way would just be a strategy not the one true strategy you would have to follow.
Maybe they already are out in California and I just missed it. But it doesn't seem like SF or really anything in California is being run according to tech/rationalist/EA principles let alone anything Federal. It seems like they tend to get an idea, raise a lot of money, then it fails and the infrastructure just kind of moves on to some other EA cause area rather than building durable political influence.
The one mild success I can think of is that a few staffers (they seem to be non-EA/rationalist types) were persuaded on the AI Risk stuff. Which isn't that impressive. It's selling Democrats on regulation which they're already ideologically inclined toward. And from what I've seen it wasn't the AI risk people inserting staffers but instead just persuasion of pre-existing staffers. And not only was that a fairly limited success (non-binding even on Federal agencies) but it's about to get reversed.
I remember some Stanford conservative business school types that were poly/rationalist ranting about how they wanted to influence the Trump administration too and it struck me that they resorted to just putting out a general call on a podcast rather than... anything else. Not even the open auditions the Trump team held. It was weird.
I mean, we have various lobbyists and campaigns and so on, and there's a decent AI risk think tank infrastructure in DC. The main reason things aren't bigger is that SBF was more excited about that than anyone else, we let him do most of the work, I'm told he built a pretty impressive lobbying network, but then it obviously all collapsed and nobody would touch us for a while, except in the places where there was something that had zero connection to him whatsoever, which wasn't that many places (mostly AI).
Sure, SBF was an unforced error but a relatively subtle one. Subtle in the sense that it's an amateur mistake but understandable. Ironically, a movement with deep roots in startup world forgot about key person risk.
If I can be blunt: You have a motivated, rich base that's small but not tiny. The issue is it keeps on shooting itself in the foot. And this isn't a new movement at this point. Why is it so bad at this? Is it just that it's mostly engineers, so most don't have the talent/interest to do bog-standard political stuff? Is it that a disproportionate number are recent immigrants and so disconnected from the American political system? Is it that all the high earning people prefer to work in tech (but isn't one of rationalism's differences that it's willing to pay tech-level salaries for stuff like this)?
Or did they just unironically swallow leftist beliefs about how money buys success in politics and end up chasing a mirage? Was SBF successful because his parents had a political background so he knew the basics of things like bundling (which he didn't even do all that well)?
I'm genuinely curious, not trying to be mean. I could equally ask the same thing about Elon accepting a powerless commission. But the tech right is not what you have insight into.
Something better than the SBF-led campaign for that guy in Oregon. Apparently he didn't check that the guy was a non-starter in the district because the other candidate in the primary had a local network! I assumed he had checked, because it seemed to me like the first sort of thing you would check before trying to get into a political race.
I can't really blame SBF for that one, he just ponied up the cash. It was a new constituency, so I think the idea there by the candidate (Carrick Flynn) was that he had a good chance.
But like you say, the first thing I did was (a) check out who the other candidates running were and (b) check out the demographics of the newly-carved out district.
Sure, he could expect to do okay with the university electorate, but a lot of the rest of the district was farming/forestry. And the other candidate had union ties to the farmers/foresters, and the rest of the slate were campaigning on local issues not "send me to Washington where I'll never come back here again and will spend all my time working on some big brain global issue and not fighting for higher wages and lower taxes for you".
That's precisely the failure mode of rationalist/EA campaigns once they get outside their little Bay Area bubble, and if that sounds unkind I'm sorry, but you can't just win by "hey, here's a great idea!", you have to show to the local voters why it will be a great idea *for them*. And Flynn's local roots just weren't strong enough - it was "I fecked off to the Big Smoke once I could get out of here" versus "I moved here, I'm well-in with all the unions, and I worked in local government here". Of course Salinas won.
Have you guys even tried getting a bunch of rich guys on board to fund a populist candidate who will take over the state/country and get all of your policies implemented?
It should be phrased "(unless you're already rich, currently hold or have recently held local office or have or can tap into a previously-existing financial and voter base)."
Treating "congressman" as an entry-level position is pretty wild; the starting point should be it's something you work your way up to through state politics/partisan involvement/extra-partisan involvement, but which a few people can (occasionally) force their way into with a siege-tower made of money. The people who think "I'm pretty nice and pretty smart, if most people saw how nice and smart I was, they'd definitely want me as their congressman" are likely to be hopelessly naive or loopy narcissists.
Yep, agreed. If you want to skip working your way up you need outstanding achievements in something else and that something else needs to translate well in specifically the voter base.
You can just walk into lower level positions though. There are entry level ones.
There are a number of "one issue, one term" candidates who won as representatives to the national government, but the problem then is that unless they manage to link up with the major parties, or have a good network, then they'll be isolated and never manage to achieve anything, so when re-election time comes up, they'll fade back into obscurity.
A somewhat successful example of this kind of campaigner is Martin Bell, who went from a career in the BBC as war correspondent to standing as an independent against Neil Hamilton (embroiled in scandal at the time) back in 1997 as "the man in the white suit":
"On 7 April 1997, twenty-four days before that year's British general election, Bell announced that he was leaving the BBC to stand as an independent candidate in the Tatton constituency in Cheshire. Tatton was one of the safest Conservative seats in the country, where the sitting Conservative Member of Parliament, Neil Hamilton, was embroiled in sleaze allegations. Labour and the Liberal Democrats withdrew their candidates in Bell's favour in a plan masterminded by Alastair Campbell, Tony Blair's press secretary.
On 1 May 1997, Hamilton was trounced, and Bell was elected an MP with a majority of 11,077 votes – overturning a notional Conservative majority of over 22,000 in the 4th safest Conservative seat in the UK – and thus became the first successful independent parliamentary candidate since 1951."
He did try a second bite at the cherry but this time the big parties didn't play ball:
"In 2001, Bell stood as an independent candidate against another Conservative MP, Eric Pickles, in the "safe" Essex constituency of Brentwood and Ongar, where there were accusations that the local Conservative Association had been infiltrated by a Pentecostal church. In this election, Labour and the Liberal Democrats did not stand aside for him. Bell came second and reduced the Conservative majority from 9,690 to 2,821.
Having garnered nearly 32% of the votes and second place, Bell announced his retirement from politics, saying that "winning one and losing one is not a bad record for an amateur"."
The UK’s very different to the US at a federal level. A congressman has 700,000 constituents. A Martin Bell equivalent (Anderson Cooper?) might do well in the US running through a primary because of pre-existing name recognition, but someone like Richard Taylor (the Kidderminster Hospital MP) would have a much harder time. Your issue needs to resonate to too many people and not be co-opted by someone with a pre-built network.
> 18: Related: Sam Harris says he has been friends with Musk since 2008, but he noticed a sudden shift for the worse in his personality around 2020 which made it impossible to stay friends with him. He gives the example of Musk losing a bet with him that there would be 35,000+ COVID cases in the US, refusing to pay up, and launching personal attacks on Sam when asked to do so.
I've been drawing a lot of analogies between rationalist/tech spaces and the Technocratic movement a century ago. Normally I think of it more as a kind of structural repeat: same underlying forces leading to similar results. But Musk is, beat for beat, going through the exact same journey as specific industrialists in the period. It's weird to see it so close.
You might already know this, but his maternal grandfather was involved with the Technocratic movement (in addition to being a racist and a chiropractor, both of which make me not love him): https://en.wikipedia.org/wiki/Joshua_N._Haldeman
I did not, thanks. This reminds me with the number of former Soviet apologists (or in some cases spies) that are now pro-China. Guess there's something in the genes.
My favorite is Adam Tooze, the leftist public intellectual whose grandfather was one of the most notorious Soviet spies ever. And OK, we don't choose our relatives. But then Tooze dedicated his most famous book to his grandfather, then went on and on in the intro about the influence he had on Tooze's life..... Now, Tooze is one of those 'well I'm not saying I'm pro-China, butttttttt.....' types. What a coincidence!
His father also did some suspect things. Three generations, as they say.
One of the more frustrating realizations I had is how many people who either advocated directly against US interests or in some cases were outright traitors (either communist or fascist) simply ended up better off for it and paid no serious price. And how many of them, especially the leftist/communist ones, are still around and influential.
> 19: Ozy profiles George Perkins, an early 20th century businessman and reformer who thought that monopolies combined the best features of capitalism and socialism, and dreamed of an America where JP Morgan employed everyone with enough benefits to serve as a social safety net. Related: Weekly Anthropocene profiles Ozy.
This is kind of still a popular moderate liberal (think Matt Yglesias) position. The FDR vision is a bunch of oligopolistic companies that are heavily regulated/partnered with the government who use efficiencies and fat profit margins to subsidize unions and generous benefits for employees and replace cash rewards with prestige/promotions.
John Nye pointed out in "War, Wine and Taxes" that Britain had more state capacity than France because it concentrated brewing in some large firms that it could tax (and that were willing to pay taxes to keep their monopolies).
I'm fairly sure the Trump administration is going to push for that kind of system anyways, simply due to the fact it gives them more top-down control over corporations. Also lets them keep their base happy by providing jobs and benefits to them while indirectly punishing dissenters.
Nah, the Republicans are highly reliant on small business support. And ideologically committed to it as well. If anything the Republicans have been positioning themselves as anti-big business and pro-small business even more strongly under Trump.
> bunch of oligopolistic companies that are heavily regulated/partnered with the government
This part is also a favorite in autocratic countries of all kinds. Giving the oil company to your cronies is a great way to prevent a rival power base from forming.
Of course, such companies are often very stagnant, but that is a price most autocrats are very willing to pay for stability.
> I have the same question as this Twitter commenter - why is this even happening in Turkey, a country which I wouldn’t expect to be too plugged into Western cultural and political trends?
Turkey's extremely plugged into western cultural and political trends. The Turkish word for secularism is laiklik which is a direct borrowing of the French laïcité and first entered the Turkish vocabulary during the French Revolution. The first reforms in response to the trends sweeping Europe happened in 1792. So three years after the revolution started. It took less than a month for news to get to the Ottoman capital from Paris and it was normal to have it translated and published.
Also Turkey's birth rates declined long before this. The Ottoman Empire's population basically stagnated from 1700 to 1900. The Ottoman population in 1700 was about 27.5 million. In 1914 it was 25 million but they'd also lost some significant territories. But even including them it was only something like 33 million. Turkey's population in 1940 was 17 million and it became 85 million in 2020. So rather than the usual case of high fertility that decreased with modernity Turkey's fertility rate increased with modernity and is now returning to its premodern stagnation.
One issue with all this discourse is it just assumes the past was one global Tsarist Russia with high fertility fueled by a lot of rural peasantry. That was not the case.
Yes, also there are millions of Turkish immigrants living and working in Europe and Turkish Gastarbeiter have been in Germany since the 1960s, so of course Turkey is plugged in. Erdogan represents the part of the population desperate to stop the deeper trends toward secularization and devaluation of traditional male and female roles but he is failing, just as all these attempts to stem the influence of technology eventually fail.
I'm not sure I agree a victory is inevitable. But yeah, Turkey has far more European influence than either the Europeans or conservative Turks like to admit. Though in turn some liberal Turks overemphasize the commonalities. And the wider Turkic world in general is probably the most secular part of the Islamic world and has been for a while. Some of the stuff with Ataturk talking about how he's irreligious but surrounded by a bunch of pious Arabs who see him as a fellow believer could probably be written by some of the Turkish intelligence officials in Syria today.
> 32: China has abandoned “wolf warrior diplomacy” where they insult everyone for no reason. Seems like a smart move.
I think they abandoned it years ago. Roughly in 2022-23. There was a logic to it: it was basically burning diplomatic capital that China felt it didn't need for domestic wins. Now it thinks it needs that for other causes. We will see what happens after the moment passes. Maybe they learn that keeping some dry powder is useful. Maybe they will have new pressing needs.
> 34: Why does China, an advanced economy, have the tap water issues that we associate with developing countries? Maybe because Chinese people near-universally believe that drinking cold water makes you sick, so they all boil their water anyway, so there’s no incentive to have water that’s safe to drink without boiling. I notice there are many things like “Chinese think drinking cold water will make you sick” and “Koreans think you’ll die if you leave the fan on overnight” - is there any health belief that foreign countries make fun of Americans for? (I’m not looking for conspiracy theories about vaccines, more like something we all take for granted).
Wearing shoes indoors.
China is not an advanced economy. Even the Chinese government describes it as in process of modernizing. It also tries to present an image of itself as advanced in a way most of it isn't and a lot of those prestige projects come at the cost of more basic quality of life. Think of the Soviet army with its massive arsenal of nukes but an inability to consistently issue soldiers with socks. The attempt to say, "No, no, we just have a CULTURE where socks (I mean clean water) isn't important" is propaganda cope.
China's idea that it can technology its way out of what is basically a series of more basic and prosaic economic problems is the great hope of many economies in similar straits. And it's never worked. But maybe this time it will, we'll see.
Anyway, Common Prosperity (and no small amount of Xi's popularity AND unpopularity) comes from a furious program of building rural and poor-city infrastructure. This involves taking money from cities and coastal areas and investing it in things like making sure remote villages have electricity. He thinks this will also boost economic growth, for reasons a standard-issue social democrat would agree with. Except it doesn't. But Xi can't be wrong, so they have to figure out how to expand that infrastructure and still hit fairly aggressive growth targets. This puts a lot of pressure on lower-ranking members and usually means more demands made of the relatively wealthy business and professional classes. But it also leads to a lot of corner-cutting.
Still, it's a net improvement and a large amount of what's fueling support for Xi and nationalism. Xi is, whatever his other flaws, not personally corrupt. And people will forgive a sincere zealot more easily than a corrupt go along to get along type (and Xi's the former). And he's had a program that has improved the living standards of the lower classes. He's combined this with nationalist rhetoric which is always popular at such moments of transition but I don't think he's cynically exploiting it. Instead I think he's genuinely a nationalist who has got many common people on board with the program.
That's certainly what China believes. It's also what Turkey believed. And South Korea believed. And Japan believed. And Mexico believed. It didn't work for any of them. Maybe it'll work for China. If you look at South Korea in particular, which probably broke out of these problems best, technology was only a part of wider structural reforms that China doesn't look willing to stomach.
Also China has about 4 million IT sector jobs vs 5 million in the US. And while they're not as well paid as in the US they're pretty highly paid. But in both countries that's not a huge part of the work force.
Anyway, ignore I said all this. I very much want Uncle Xi and Uncle Sam to get into a bidding war over who can pump more money into my industry. It's very important and will definitely solve everything.
Turkey and Mexico both bet on specific industries with varying levels of success. Mexico ended up growing mostly from integration with the US but suffered issues more related to human capital than technology. They are key to many advanced US supply chains, for example. Turkey also did something similar to Europe but, for basically military/political reasons, invested more in domestic production and as a result they're significantly ahead of East Asian advanced economies (and even further ahead of China) in certain things.
Technology isn't unimportant and you're right that productivity is part of the puzzle. But it's not complete and it's not the only necessary driver of productivity.
Xi has said (in so many words) he thinks the issue was too generous welfare, not a lack of technological progress, in those countries. So he thinks that they can outwork the problem. Which is a fairly typical response for that economic model honestly.
China isn't more successful than Mexico. They're about equal.
I agree they have oversized welfare states. I don't think that it's entirely a story of too much government though. In particular, I tend to emphasize the inability to upgrade mid and low end human capital (i.e., education) and uneven infrastructure as part of the issue. And I don't think outworking the problem is a real solution. There are only so many hours in the day, and you will start to see fatigue as growth slows (as mathematically it must), so the outsized rewards to working diminish.
There's also the consumption issue. China has chronically low consumption which is fairly typical for its economic model but hypercharged in China. The economy is also significantly more government and state owned enterprise controlled. Mexico has significantly more per capita consumption because it has a more normal looking economy.
And the debt issue. And the financial system weakening. And so on.
The Chinese economy needs significant structural reforms and cooperation from its trade partners (which is mostly the US and its friends). I think they're hoping technology is a get out of jail free card. But there's plenty of examples of economies that were innovative but that were not broadly successful.
Well, the real answer is we don't fully know. And he's probably at least somewhat corrupt. But compared to his predecessors and contemporaries he appears to be living a less lavish lifestyle, has excluded his family from positions more, that kind of thing. So it at least gestures in that direction.
He's definitely corrupt. His family was worth a billion dollars or more, and that was *before* he ascended the throne. Xi has periodically retaliated against Western media reporting on his family wealth, and some of them, like Bloomberg, knuckled under. In addition to them getting much better at hiding their wealth, that's part of why you don't hear much about it.
Well, to be clear, we're grading on a curve here. But also: "his family" is mostly his brother-in-law, who made his money in real estate. Of the estimated family fortune of roughly a billion dollars (I've often seen $750 million quoted), he accounts for at least $300 million and maybe up to $600 million of it. He's a fairly typical story of a red prince who went into business during the boom, and most of his fortune was made before Xi was in a position to hand out favors. (His brother-in-law's parents, and Xi's own father, were in a position to help through things like government favors and smoothing regulation, of course.)
That still leaves a nine figure amount though. Now, how does someone with an official salary of $22,000 a year have, spread across about a dozen relatives, tens to hundreds of millions of dollars? How does his daughter afford Harvard tuition and a nice, trendy apartment? It's not his book sales, which officially go to the party. But also he doesn't have Putin-style palaces, or if he does no one's discovered them. (If you do have some report on that I'd love to see it and update. Also I like to see lavish corrupt palaces in general.)
Additionally, during the height of the anti-corruption campaign, Xi personally led the investigation into his own family and while he found that they had done nothing wrong (of course) he also found some of them had done things that might be perceived as wrong and ordered them to cease various activities. Which might be performative but the guy we have the most visibility into (his brother in law who has significant business abroad) really did sell off a lot of conflicts of interest.
Maybe I'm wrong here. But Xi strikes me as a true believer, not as someone who's grabbing what he can or living a lavish lifestyle.
China is not *not* an advanced economy. "China" barely exists, and is much more decentralized than many in the west assume. Some places in China are 3rd world. Most places in China are 2nd world. A large minority of places are 1st world, with a few (the tier 1 coastal cities) arguably 0th world, in league with other "0th World" places like Tokyo and Singapore. There's an America-sized group of people in China at American (or above) living standards, and a billion people at varying levels below that.
(and yes, Tokyo and Singapore are 0th world. If NYC is 1st World...well, Tokyo and Singapore definitely aren't that.)
What year was that? My impression is that cities like Shanghai have more or less caught up, and the rest of China has grown substantially.
In NYC, no one has cars, people live in 100 year old tenement buildings, and they have to go to laundromats too (hanging clothes outside seems more a cultural thing than a wealth thing. The Japanese can def afford dryers, only a few weird ones buy them).
>having to walk/take the train and hang your clothes on a clothesline
if that's poverty (speaking as a certified poor person) it doesn't seem that bad. Definitely not worth, e.g., slashing the welfare state or letting inequality skyrocket to fight.
Poverty in the US felt way worse, one wrong move or accident and you'd be out on the street
Having parts that are advanced and parts that are not advanced is just being not advanced. Many countries have a single advanced, global city that can compete with the best of the first world. China has more than one because of size. But that's the normal dynamic.
There's an American sized number of Chinese people who are living in conditions that range roughly from places like Poland or Romania on the low end to South Korea on the high end. And there's about a billion people living in what are somewhere on the border of third and second world conditions, basically more like Ukraine or Uzbekistan. And a few really rural areas that are poorer than that but are marginal.
The people who live better lifestyles than New York do so because they are elites even within their context. There's very little in terms of bleeding edge technology or even diffusion of that technology that you can't find in first tier American cities. But Singapore or Shanghai benefit from a large number of relatively low paid workers which create conveniences for professional class people that their American equivalents don't have. This is a real benefit but it's also one that's characteristic of a less advanced economy because it means there's a lot of cheap labor floating around. Also, they benefit from an East Asian cultural preference for urban density that creates agglomeration effects.
That isn't to say it's fake. It's not. A software engineer in Shanghai or Singapore really does have access to conveniences and amenities that one in New York City doesn't. But that's not because China's advanced. It's because it's got a bigger gap between rich and poor and that benefits people on the top half of that divide. In effect, if you're a professional, China (and Singapore et al) are the best tradeoff of being modern while still having cheap labor. If you like cities anyway. But you can also get that in places like Bangkok where the nation is more obviously not advanced but has one big international city.
To take a simple example, food delivery is better over there not because their apps are more advanced but because they have more drivers and chefs who are paid less to the point that ordering out is significantly cheaper.
I assure you the reason why Shenzhen, Tokyo, and Singapore are nicer than NYC has very little to do with the availability of cheap labor. Have you been to Manhattan recently? The place is turning into a total dump. The MTA is filled with trash and sketchy people and looks like a dungeon. There are random people just hawking fake gucci bags on the street (which often smells like piss). Tokyo and Singapore are, suffice it to say, not like this. And these "conveniences" are available to everyone, not just elite software engineers.
Bangkok (and Thailand as a whole) may be a better example of this "advanced in parts but not advanced" that you describe. I would agree Thailand is "not an advanced economy" even though there are parts of Bangkok that feel 1st or 0th world. Most of the city is a total dump, and when you leave it, it's even dumpier. There are no major internationally competitive Thai firms either.
China is different here. Unlike Thailand, or Romania or Ukraine, China has dozens of these fairly nice cities. Even more than their relative size would indicate (see India for an example where there's really only a few). Pick a random tier 2 city and search for "4k walking/driving tour" on Youtube. These places are more developed than you may realize (nicer than Bangkok, at least). And unlike Poland, Ukraine, Romania, or Uzbekistan, these cities are home to genuinely globally competitive advanced firms. Deepseek is catching up in AI, despite GPU export restrictions. DJI owns the consumer drone market. Tencent, Alibaba, Xiaomi, Huawei, and Baidu go toe-to-toe with FAANG in many areas. Bytedance (and now Xiaohongshu lol) are beating American firms in social media to the extent the Feds are freaking out and trying to ban them. CATL increasingly owns the battery industry. BYD is outselling Tesla. Chinese solar companies have lowered costs so much that we've slapped triple-digit percent tariffs on them (and on BYD too, btw).
To say this doesn't constitute an "advanced economy" because some parts are substantially less advanced is pure, unadulterated, COPIUM.
“Worth less” is more a function of their capital markets being sketchy, to say the least. Tencent even with that has a ~500B valuation.
If you really wanna argue that IBM and HP are really, truly, worth more than Alibaba and Baidu…I’m not sure what to say. IBM??
It’s not that China is richer than the US, or that they work fewer hours, or that they don’t have their own problems. But they’re catching up, *very* quickly, and the West has its head in the sand and is coping via narratives 20 years out of date.
Yes. The crime problem is definitely worse in NYC. Though I will say there are places in first tier Chinese cities where you can find hawkers and pickpockets and the like. They just operate more quietly. And there's plenty of dumps in parts of Chinese cities. Cramped tiny apartments and the like. Not a lot of crumbling infrastructure because it's mostly new but some bad construction that will degrade in coming years.
Also, on a per capita basis, most of those nations come out ahead. Having even a single city comparable with a single Chinese city puts them ahead per capita because China has so many capita and most nations have so few. China's size is a definite advantage. India, which is far behind economically, is not a fair comparison. But also, interestingly, India's investment model meant to stimulate high tech industries has also kind of worked and they produced tech firms and competitive engineers at a much higher rate than comparable (that is, low income) countries. Which would be Nigeria, not China.
China does invest a lot in infrastructure. But this is the classic move: you've now moved on from praising China to putting down other countries. And in this you're incorrect. There's plenty of advanced industry in those countries and internationally competitive firms. Even Ukraine has some things it can do that China's been struggling with for decades. China does lead them in electronics (which is where all of your examples come from). But that's a specific sector and, again, it's common for countries to have dominance in specific sectors. Thailand, for example, does a lot of pretty high end luxury manufacturing.
You're also exaggerating China's lead and achievements even in that space. So by subtly pushing China up and everyone else down you construct a world where China is further ahead than it is.
But you typed COPIUM in all caps after a string of adjectives. So I guess you're right.
I say copium because I think your view of China is accurate as of 2000, but not accurate in 2025.
And I think westerners in general are failing to update on the amount of growth they’ve had in the last 20 years.
(Also, the issue in NYC is not a “crime” problem per se. More a “the city is covered in trash and grime and has obviously deteriorating infrastructure” problem)
My view of China is based on up to date information, qualitative and quantitative, from both China and abroad. And what I've said doesn't resemble the China of 2000 at all when it had a GDP per capita of less than $1,000 and its electronics market share was about 4%.
Also western knowledge of China is stuck in pre-covid times since there was a great deal of hostility to foreigners at that time and most of them left and few returned. Which actually means they tend to assume China is growing faster than it is since GDP growth and technological breakthroughs were faster back then.
I've been to Tokyo, and I'm curious what you're referring to. Certainly the trains are much better than anything in the US, but that's just path dependence + density. Other than that, I'm at a loss for what you could be thinking of.
An America-sized group of people at or above American standards PLUS a billion people at a lower but still not that destitute (Vietnam?) level would require a much higher PPP GDP for China relative to the US
Not necessarily true if you assume that living standards don’t perfectly correlate with GDP. Japan’s PPP per capita GDP is much lower than the US’s, but I’d argue living standards are comparable if not better in ways.
Trains are better - hell, Infrastructure is better in general. The average restaurant or store feels nicer, service is better. The streets are cleaner. The feeling of safety is much better. Few homeless or junkies - saw ZERO used needles on the ground. More convenient amenities everywhere.
This isn’t a density thing either - I’m comparing to NYC and Boston.
> 45: The Right Looks For Converts, The Left Looks For Traitors. There’s not much in this post beyond a natural expansion of the title, but it’s a snappy phrase, and matches my observation of the past ten years with friends and contacts on both sides. But I found myself thinking about it now because, for the first time in ten years, it no longer seems to be true - the Right has gotten much more into looking for traitors (I have yet to see leftists looking for converts, but anything can happen!), and I’m getting more harassment, illiberalism, and purity testing from the right part of the blogosphere than the left. I still basically believe the Barberpole Theory Of Fashion that cool people optimize their signals to separate themselves from the most obvious group of uncool annoying people in their vicinity; for a long time, that’s been SJWs and the Right has benefited, but I predict this has begun the very long process of changing (cf. Richard Hanania’s political course).
I think it's simpler than this. The US is a semi-dominant party system. The first American party system was the administration party (who supported Washington) and the opposition (who opposed him). Each party system has a dominant party and an opposition party. Since 1933 the Democrats have been the dominant party and Republicans have been the opposition. The dominant party is always marked more by internal fights since it can assume that it will mostly be in charge so what's important is what they do while they're in charge. The opposition party is more opportunistic because it can define itself as opposed to the dominant party. If you want to see this the other way the Republicans were the dominant party from 1861-1933 which is why the Democratic coalition included both socialists and the KKK.
We're probably entering a new party system but the Democrats are still acting like they're the dominant party and the Republicans like they're the opposition. Which is probably still the default unless the new party system makes the Republicans dominant.
I think it's too early to tell if there's a new party system. I recall agnostic of the Akinokure blog (now "Face to Face"?) claiming there was a cycle to transition to new party systems and that was why Biden couldn't beat Trump in 2020... He stopped letting me comment about how he was wrong, so I haven't kept up with him in a while.
I think it's widely agreed we're in a dealignment period. Basically a transition between party systems. In fact I can't think of anyone who disagrees with that.
I do not think anyone has a firm idea of what the transition is to or what the results will be. A system transition does not necessarily mean the previous dominant party loses power. But it does mean radically different coalitions and politics.
I can see possibilities, and some possibilities I'm pretty sure won't happen. But that leaves far too much latitude for such an absolute "this will happen" claim.
Yes. A less rhetorical way of putting it is: Even if the Democrats put humpty dumpty back together and remain as the dominant party it will be with different politics and a different coalition. So the system will shift.
How does this theory comport with 1980 to 2008? Reagan won by a landslide in 1984, and Bill Clinton governed very far to the right of typical Democrats and still lost Congress. After Clinton, Bush 2 was even further to the right on religion and nationalism.
I could get behind a theory with much shorter timeframes (15-30 years), but the full run of 1933-2025 has counters to the theory.
It's usually put as 1933-1994, at which point we entered a period of de-alignment that's lasted from the mid-90s to today. But the new system hasn't really solidified yet. Though there's a lot of disagreement around the edges.
It's not that it can't account for it. It's that it sees it as a transition between systems. That these transitions take decades is normal but you usually include it as part of the previous system instead of the succeeding one.
45: If you think of Jonathan Haidt's Moral Foundations theory, conservatives tend to place a greater emphasis on group loyalty, so you would imagine they'd be the ones more apt to engage in heretic hunting. I can't help but point out that people on the right have been bashing one another as RINOs and "cuckservatives" for roughly as long as I've been alive.
On the other hand, if you're an out and out collectivist, you would need to police your ingroup more closely for free riders and defectors who could undermine your group's solidarity. And I bet Robin Hanson would say that whatever politics you subscribe to, attacking someone for insufficient fealty would be likely to raise your status within your group. Richard Hanania would probably point out that progressives just care more about politics in general and thus are likely to expend more energy trying to enforce norms on each other. So...maybe.
34: here's one possibility: a lot of people seem to think that being cold for a period of time makes you more likely to get sick. You might "catch a chill" as they say.
45 - Conservatives in the 1950s and again in the 1980s seemed much more interested in driving conformity than they are today. The 1950s broke out into the counterculture of the 60s when the left was more open to different viewpoints and gained support. You can also look at related ideas - the left was very much about free speech when they were the outgroup and the right controlled powerful institutions.
It seems correct to me that when a political group is weak they are open to lowering standards for alliances but when they get strong they start the purges of anyone not sufficiently supportive of what they consider core goals.
Woke was definitely "more accepting" of differences in the early years (and wanted free speech, etc.), and then switched to ideological purges and censorship when they felt like they were in charge.
Yes, good point about weak vs strong positions. I was kinda thinking in 'all else equal' terms, but whether your group happens to be the underdog or the uberhund is probably always the more important factor.
But you could also argue that heretic hunting is a sign of weak group loyalty, because you're attacking people within your own group. And then use that to explain heretic hunting on the left, which is also a big thing.
Maybe heretic hunting is how you strengthen group cohesion, though, or at least maintain a certain level of it. You trigger a shame reaction, which gets people to adopt the group's beliefs as their own in spite of whatever their own inclinations might be, or maybe you pick out a few loud dissenters and burn them at the stake or whatever, and everyone else is too intimidated to do anything but go along with the group.
I guess it's a direct proxy of low asabiyyah? Low asabiyyah is usually correlated with success, so whenever someone gets a slight upper hand, witch hunting it is.
28: This premise seems just obviously false to me. Perhaps adults are ON AVERAGE more skilled than teens in every way, but it is not the case that every adult is better in every way than every teen. Some teens have some advantages over some adults.
The naive economic model predicts that, anywhere a teen is employed, there was no adult that was BOTH a better potential employee AND willing to do the work (for the same wage). The fact that many teens successfully find jobs does not contradict this model in any way that I see. It just means that each individual local adult was either already employed, holding out for a better job, or defective in some way relative to the teen--which seems pretty plausible on my general world model.
I do think the claim "human labor will not be worthless" is still technically literally true--the naive economic model just predicts that human wages will fall until they are competitive with AI. As long as the AI costs more than "literally zero", then there is some other "also not literally zero" wage where paying a human to do the job costs the same amount per unit of work performed. (Though it makes no guarantee that this will be enough for the human to survive on; it could be "one cent per decade" or something.)
> The naive economic model predicts that, anywhere a teen is employed, there was no adult that was BOTH a better potential employee AND willing to do the work (for the same wage).
"For the same wage" is the important point. If teens are worse at everything, they still do whatever they have a comparative advantage in, and make less money.
The issue here is that with cheap, smart AI, the market price for a human will likely be below subsistence. Also, it's likely that there would be costs involved in hiring them that are more than they're worth. Like imagine the cost of installing ramps is twice that of hiring an able-bodied employee. You could hire someone in a wheelchair, but it's only worth it if they pay you.
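The comparative-advantage point above can be sketched with a toy calculation (all the numbers below are invented for illustration, not from the thread):

```python
# Toy comparative-advantage sketch (all figures are made up).
# The adult is absolutely better at BOTH tasks, yet the teen still
# gets employed in the task where the adult's opportunity cost is highest.

adult = {"widgets_per_hour": 10, "gadgets_per_hour": 8}
teen = {"widgets_per_hour": 2, "gadgets_per_hour": 6}

# Opportunity cost of one gadget, measured in forgone widgets.
adult_cost = adult["widgets_per_hour"] / adult["gadgets_per_hour"]  # 1.25 widgets
teen_cost = teen["widgets_per_hour"] / teen["gadgets_per_hour"]     # ~0.33 widgets

# The teen has the comparative advantage in gadgets despite being
# worse at both tasks in absolute terms, so it pays to hire them there.
teen_makes_gadgets = teen_cost < adult_cost
print(teen_makes_gadgets)  # True
```

The same logic is what breaks down in the wheelchair-ramp example: comparative advantage guarantees a role, but only at some wage, and nothing stops that wage from being below subsistence (or negative, once fixed hiring costs are counted).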
One of the open questions mentioned is whether massive abundance enabled by machines doing most/all human labor will make it worthwhile to grab even a tiny slice of the very large pie.
(I.e., if I'm making $1T/day, inflation-adjusted, I can afford to pay workers $10,000/day and not notice the cost. I'm way better off, sure, but compared to even a worker making $400k/yr. today, they'd be nominally better off under this system.)
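The arithmetic in that parenthetical checks out; a quick sketch using the commenter's own hypothetical figures:

```python
# Checking the hypothetical numbers from the comment above.
owner_income_per_day = 1e12      # $1T/day, the hypothetical abundance scenario
worker_wage_per_day = 10_000     # the proposed $10,000/day wage
todays_high_salary = 400_000     # today's $400k/yr comparison point

# The wage bill is a rounding error from the owner's perspective...
wage_share = worker_wage_per_day / owner_income_per_day
print(wage_share)  # 1e-08, i.e. a millionth of a percent of daily income

# ...while the worker out-earns today's $400k/yr professional by ~9x.
worker_annual = worker_wage_per_day * 365
print(worker_annual > todays_high_salary)  # True ($3.65M/yr vs $400k/yr)
```

This only shows the pie is big enough to share painlessly; it says nothing about whether market forces would actually set the human wage there, which is the point contested in the rest of the thread.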
Would something like this actually happen, though? Not in a frictionless, spherical world where everyone is a perfectly 'rational' economic actor. But the world we currently live in has a major movement of people trying to 'buy local' to support businesses in their communities, despite the fact that supermarkets, Walmart, and Amazon long ago defeated that business model. Same with 'hand crafted' versus machined. People pay for imperfection sometimes. There's a perceived economic benefit to paying a nominal price to feel more ethical, and we've observed this in the marketplace today. So long as humans remain economic actors, we should expect this trend to continue.
If it 'costs a little more, but gives people much-needed employment', it's hard to imagine an expanded class of uber-wealthy people choosing NOT to spend $10,000/day hiring people to dig holes and fill them back up again, if nothing else.
Seems like you're mixing together "maybe the economy is so productive that humans could be paid 1000x less than robots and still make a living" with "maybe the rich will give charity to the poor". Those are both interesting ideas, but I think it would improve clarity to separate them out.
I'd tentatively be pretty happy with a system on the order of "tax automation to pay for UBI", but I'd still consider that pretty different from human labor actually having value.
To some extent, humans + machines (and technically other forms of capital/accumulated human discoveries) are already so productive they can be paid 1000x more than humans alone, in say 5,000 BC.
To some extent, both ethical scruples and charitable giving have skewed markets away from some theoretical worst-case scenario.
Also to some extent, most advanced societies have some form of government-based wealth redistribution program, if not many.
I imagine all these scenarios expanding as AI becomes more capable.
I'm not advocating the labor theory of value. I'm saying that in economic activity writ large I don't see humans completely excluded from the market just because productivity increases accelerate.
But you're still competing against robots. No matter how massive the abundance is, if it's cheaper to build robot workers than feed humans, it's not worth it to feed humans.
> There's a perceived economic benefit to paying a nominal price to feel more ethical, and we've observed this in the marketplace today.
Yes, we can all live happily as a result of charity. If we program robots to care about all humans, or care about specific humans that care about all humans, that works out well. But paper clippers would see no reason to keep us around.
There's also the possibility that AI that cares about ethics will conclude that it's better to kill one human and use the resources to support trillions of super happy AIs, but I'm with the AI on that one. I just want to maximize happiness, not make sure the people who are alive now are happy at any cost.
I think we're arguing from different scenarios. In your scenario, computers take over the world. In that case, sure, they'll probably get rid of humans once they have no more need of programmers and repairmen.
In a non-singularity environment, where economic activity is preserved but distorted by the availability of unlimited intelligence, I'm arguing that there's historical precedent to believe market forces will continue to value human effort - even in a world where AI can drive, paint, write, and do everything a normal human can and then some.
I'm NOT arguing that human labor will be dominant, or even competitive.
This is all predicated on the idea that AI workers will be significantly cheaper than human workers. Robots are technologically advanced and require significant amounts of advanced components, metal and other resources, electricity, and maintenance. There's going to be a pretty high base cost to general robot labor. Right now machines used to replace labor (automation at factories, for instance) gain benefit from highly specializing in repeated tasks and extremely high volumes.
A general robot babysitting kids, mowing lawns, or doing other manual tasks is going to have very little advantage (and, once robot-specific problems are factored in, may actually be far worse) over humans doing those tasks. LLMs and current ideas about AI labor don't involve general tasks, but specifically the kinds of tasks that humans do with minimal physical action. Writing an email, setting up a meeting, reading Wikipedia.
There will not be a time when humans are expensive and robots are cheap, unless there's a complete change in our understanding of physics. There will be downward pressure on human wages, and in many cases humans will no longer be able to do certain work economically. But that's very far from a general "humans can't do anything economically compared to robots." I can't rule out that humans end up doing most physical labor while robots do most or even all of the intellectual labor. That robots will do all of the physical labor just doesn't make sense.
> A general robot babysitting kids, mowing lawns, or doing other manual tasks is going to have very little advantage (and, once robot-specific problems are factored in, may actually be far worse) over humans doing those tasks.
Many parents would pay a big premium for a robot which could babysit their kids as well as a human. And robots already do an enormous amount of lawnmowing.
"As well as a human" is a bit of an interesting question here. What would it take for a robot to have the social and physical characteristics of a living, breathing human? One that a human child could bond with, forming healthy social interactions that generalize to later human-to-human bonding?
There's a pretty high base cost for human labor, too. It takes well over a decade of continuous labor to produce a fully-functional human laborer. Once they're mature, there are still major ongoing costs for housing, food, and medical care.
This is disguised because typically you RENT a human laborer instead of buying them. But we can do that with robots, too, if it makes economic sense to do so.
It's true that cracking intelligence is not the same as cracking robotics, so there might be an exploitable gap there. On the other hand, we already use robots for lots of labor, AGI is likely to greatly accelerate R&D, and robotics will only get better, not worse. And just as humans provide an existence proof for what's possible for brains, they also provide an existence proof for what's possible for bodies.
>AGI is likely to greatly accelerate R&D, and robotics will only get better, not worse. And just as humans provide an existence proof for what's possible for brains, they also provide an existence proof for what's possible for bodies.
Agreed. And specialized machinery to _build_ robots by the millions will reduce per-robot costs (with tradeoffs on how much one pushes optimizations improving the design of robot bodies vs how much one optimizes specialized machinery to manufacture _fixed_ designs for robot bodies).
Yes, but we value humans qua humans. We would want humans to exist even if they were neutral in terms of productivity, and in a society with predictable excess we value humans even if they are net losses in productivity.
So there's a built in buffer for how productive a human needs to be per cost. That is, we'll take a loss in some areas to support a human that we would never accept or even consider for a robot.
It's the same reason a business owner might hire his less-than-competent nephew even at the same or higher wage than an outsider who already possesses the desired skillset. We pay extra to gain in ways outside of direct productivity.
Seems like you have switched from an argument (in your previous comment) that humans will be economically competitive for some tasks, to a new argument that humans will survive because of charity?
Historical humans had to be, on average, at least as productive as their costs. If you didn't bring in enough food to feed yourself, then you died. Excess production improved your living conditions - better house, larger family, better fed.
But tools and machines freed us from a lot of the natural constraints. We grew wealthy enough that large numbers of humans could live while a few did the basics. "Civilization" arose from this process, but it continued in big leaps. We went from a large majority of the population working to get food to less than 5%. We don't think of, say, librarians, as living off of charity, even though they do not produce any of the basic necessities of life.
The way we value librarians is one sense of what I mean for a world where robots produce our basic necessities. We would continue to pay for librarians as long as they provide some sense of value, even if that value is detached from survival and necessity. We cannot say whether a librarian is or is not economically productive, because the position exists for other reasons. To extend this to a post-robot world, we may value childcare and eldercare from actual humans in the same way: something with an intrinsic value that can't be compared to GDP but that we want to continue anyway.
The second sense I mean is that a robot would need to produce in excess of its costs, or we would not build the robot. Even if a company could exactly balance the cost of automating a function with the benefits, it would not choose to do it. We would, though, be okay balancing the cost of a human life with the benefits, even while being economically conscious. We value human life separately from whether such a life has a positive ROI (though we don't want a negative one), just as we still valued non-farming jobs when farming occupied 90%+ of the population and food was often scarce.
The answer to #29 on reduced fertility is that you need to split the drop in fertility into two parts.
The first part ended around 1960 or so in the West. This was largely about fewer children per couple (huge drop from around 8 to 3, or the "2.5" used for comedic effect when I was a kid).
The second part from 1960 to today is more complicated. There seems to be a small additional drop in children per couple that largely is due to delay in the first pregnancy. Some of it is also from a big reduction in teenage mothers (this is the group for whom the fertility rate has dropped the most). I suspect confounding between this and reduction in marriage, especially in the black population. Black women have a much lower rate of marriage today than in 1960, but they also have a much lower rate of teen pregnancy. The former did not cause the latter.
I believe in the U.S. that the black total fertility rate recently dipped below the non-Hispanic white TFR.
My impression is that a lot of the decline in fertility since the Housing Bubble has been among the lower half of society. They've become more careful the way the upper half did long before them.
Yes, it took longer for young motherhood to stop being a source of status for low-income people.
It was never a source of status to be a young *unwed* mother for the upper 4/5ths socioeconomically, but for a while (I remember studies of "welfare moms" from the 70s-80s) for the lowest 1/5th it was a source of status and sense of accomplishment, and adequate money without work requirements.
The biggest TFR drop for blacks occurred in the 1990s (Clinton era welfare reforms and social shaming for teen moms probably both played a role).
Most of the change in attitudes/norms had already happened before the Housing Bubble, but that and the Great Recession that followed probably cemented the changes. In any case, now it appears self-sustaining. There is strong social reinforcement of birth control and abortions for young women of all incomes and ethnicities in order to live a more unfettered, individualistic lifestyle, and pursue education. Hispanics are a partial exception.
On 18: I’ve only ever worked 100hr weeks for a few months at a time, but Musk always read to me as work drunk. Just trying to keep things moving, doesn’t have the stabilizing personal life, and just on task all the time.
Then he had the personal issues with his son “becoming” trans and I think his appetite for subtlety went away.
I also see the Sam bet through Elon's eyes, or what I imagine are his eyes. Here's a guy he's friendly with, reminding him he needs to fork over a million in cash, while he's fighting to keep his companies going.
In short: he’s work drunk, over-optimized to keep his business going and all the stuff in the periphery is just sort of falling off. My heart breaks for him a little bit but I don’t think he wants to slow down.
>I also see the Sam bet through Elon's eyes, or what I imagine are his eyes. Here's a guy he's friendly with, reminding him he needs to fork over a million in cash, while he's fighting to keep his companies going.
Elon insisted on the bet even though Sam thought it was ridiculous (quoted from Sam's post below):
Elon’s response was, I believe, the first discordant note ever struck in our friendship:
> Elon: Sam, you of all people should not be concerned about this.
He included a link to a page on the CDC website, indicating that Covid was not even among the top 100 causes of death in the United States. This was a patently silly point to make in the first days of a pandemic.
We continued exchanging texts for at least two hours. If I hadn’t known that I was communicating with Elon Musk, I would have thought I was debating someone who lacked any understanding of basic scientific and mathematical concepts, like exponential curves.
Elon and I didn’t converge on a common view of epidemiology over the course of those two hours, but we hit upon a fun compromise: A wager. Elon bet me $1 million dollars (to be given to charity) against a bottle of fancy tequila ($1000) that we wouldn’t see as many as 35,000 cases of Covid in the United States (cases, not deaths). The terms of the bet reflected what was, in his estimation, the near certainty (1000 to 1) that he was right. Having already heard credible estimates that there could be 1 million deaths from Covid in the U.S. over the next 12-18 months (these estimates proved fairly accurate), I thought the terms of the bet ridiculous—and quite unfair to Elon. I offered to spot him two orders of magnitude: I was confident that we’d soon have 3.5 million cases of Covid in the U.S. Elon accused me of having lost my mind and insisted that we stick with a ceiling of 35,000.
I feel like he's always been this 'nuts'. He seems to lean into situations where he disagrees with popular opinion when he can justify his perspective from first principles.
That's how he arrived at electric cars, tunnels, rockets, and most especially colonizing Mars - his highest priority. I think sometimes this makes him come off as weird, especially when what he regards as a 'first principle' is less directly tied to laws of physics.
He also reacts quite strongly when he encounters something that throws him off his stride. That didn't matter much when he had a smaller following and he was talking about getting rid of side view mirrors on cars, building cars from one huge casting, making a fully automated car factory with no humans, or removing buttons and knobs from the user interface. It's more pronounced now because his audience is bigger and the topics more far reaching.
Like you mentioned, he disagreed with his kid about the trans issue and suddenly started talking about the "woke mind virus". He got frustrated with censorship on his favorite app, so he started talking about free speech and bought Twitter.
I believe what activated him politically during the Trump campaign was delays in rocket launch approvals for Starship. This was more than just an annoyance; it was a huge problem for his core mission. He perceived that the Biden administration was trying to punish him for weird tweets by holding up launches. But with a tight launch window for getting to Mars every 2 years, Musk doesn't have time to waste.
On a tight schedule, Musk decided to go all in on the opposition. This is why he kept saying the election was an existential decision point for humanity. In Musk's mind, if he doesn't get us to Mars, we'll never go, and if the government makes it impossible for him to get us there, we're a one-planet species and that means we're doomed.
He did that years ago, but the only concrete consequence I'm aware of from that brouhaha was a bunch of mean tweets by Musk protesting the injustice of not getting invited to a pageant that's meaningless if Tesla's not there.
I'm not claiming the decision to delay rocket launches changed Musk's political orientation. I'm saying it was the threshold event that moved him to spend millions of dollars in an effort to change things. Musk is pretty open with what's on his mind. If you listen to his speeches during the campaign, he regularly talked about being at a tipping point for humanity, and about how bureaucratic procedures would soon make it impossible to get anything done. As an example, he would talk about having to do sonic boom tests on seals, and how hard it is to launch rockets.
You're right, though, that he's also dealing with government regulators in his other companies. So I'm sure there's some multi-causal element at play.
This is Sam Harris' account of what happened, and since I don't like Sam Harris, I give less weight to this being the pure unalloyed truth. Not that I'm saying he's lying, but everyone gives their version of what they think happened, which may not be the same as what really happened.
X: I gently reminded Y of the promise he made but Y got unreasonably angry and refused, what a miser!
Y: X was HOUNDING me about a dumb silly joke at a time when I was MASSIVELY stressed and I lost my temper, wouldn't you?
Isn't Harris the one who said no sin is too great to prevent the election of a candidate he doesn't like?
The candidate I don't like gets elected EVERY time, but I don't use that to justify tossing my whole ethical framework. Maybe Harris claims this is/was a one-time thing, but once you cross that Rubicon, it's tough to convince me that he's back to normal thinking again. (Or maybe he was always terrible.)
Anyone who justifies every lie, privation, and conspiracy to undermine democracy for a narrow political objective gets a permanent 100% discount on all future statements.
Revised Monty Hall, Secret Billionaire Goat, solved by exhausting all possible universes:
1/3 odds: You picked the car.
  50% odds: Monty reveals the bad goat -> switching is "correct"
  50% odds: Monty reveals the good goat -> not switching is "correct"
1/3 odds: You picked the good goat. Monty ALWAYS reveals the bad goat -> not switching is correct.
1/3 odds: You picked the bad goat. Monty ALWAYS reveals the good goat -> switching is correct (might as well get the car).
In total:
You had a 3/6 chance of being in a universe where the good goat is revealed. Not sure what you do here; probably just offer to take the good goat. If the host doesn't allow this, regular Monty Hall rules apply: switch for better odds at the car.
You had a 2/6 chance of being in the "picked the good goat" universe, where not switching is correct.
You had a 1/6 chance of being in the "picked the car + bad goat revealed" universe, where switching is correct.
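The universe-counting above can be sanity-checked with a quick Monte Carlo sketch (assuming, as in the enumeration, that Monty always reveals a goat and picks uniformly between the two goats when you hold the car; the variable names are my own):

```python
import random

def trial(rng):
    doors = ["car", "good_goat", "bad_goat"]
    rng.shuffle(doors)
    # Door indices are symmetric, so always "pick" door 0.
    # Monty opens a goat door among the two you didn't pick,
    # choosing uniformly when both hide goats (i.e. you picked the car).
    goat_doors = [i for i in (1, 2) if doors[i] != "car"]
    revealed = rng.choice(goat_doors)
    return doors[0], doors[revealed]

rng = random.Random(0)
counts = {}
N = 60_000
for _ in range(N):
    key = trial(rng)
    counts[key] = counts.get(key, 0) + 1

# Universe probabilities from the enumeration:
# good goat revealed: 3/6, picked good goat (bad revealed): 2/6,
# picked car (bad revealed): 1/6
good_revealed = sum(v for (p, r), v in counts.items() if r == "good_goat") / N
picked_good = counts.get(("good_goat", "bad_goat"), 0) / N
picked_car_bad = counts.get(("car", "bad_goat"), 0) / N
print(good_revealed, picked_good, picked_car_bad)  # ≈ 3/6, 2/6, 1/6
```

The empirical frequencies land on the 3/6, 2/6, 1/6 split claimed above.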
On 17 - if you’ve played a fair bit of PoE2 and then watch Musk streaming the game, it becomes clear very quickly that he is not playing the game at a high skill level (neither knowledge nor execution). This makes the situation more interesting in my opinion, because he should have known how obvious this would be, yet he did it anyways.
I don’t find that persuasive. Aside from the fact that I think he cares deeply what others think about him (though he denies it), why wouldn’t he just tweet that the sky is purple or something? This seems like a very specific and strange way to go about flexing.
Yeah, that's true, but I do find it funny. A lot of people are losing their minds over this, but I think "Elon Musk is bad at this game, git gud noob" is very relatable for those of us struggling with "I don't know what your problem is, I just breezed through Act 3 on Cruel setting and I never had any problems, you're just bad at this game" smuggery from others on the discussion forums.
(I do see a lot of "he BOUGHT all his gear, he paid someone to farm it for him!" but in my own opinion, that's what the trading mechanism in PoE is anyway, so it's a difference in degree, not in kind. Paying someone to play it for you is, of course, different.)
RMT/account sharing/boosting != in-game trading. One is a feature, the other is a bannable offense.
People who think the game is too hard have nothing in common with someone lying about being the 11th highest ranked player while someone else does the hard work of actually playing the game for them.
I totally agree that "git gud" smuggery is annoying and toxic, but it's completely unrelated to this.
Musk is one of the most prolific and flashy Trolls to ever Troll. He is also probably the absolute king of shit-posting. In many ways, he is 4Chan personified (or is that de-anonymized?) a smart person who blows off steam by acting like a cretinous idiot. When you look at him in this frame, pretty much all of his actions make sense.
Up until very recently, most people who acted this way in public would very quickly become personae non gratae in any polite company, but now it just so happens Musk is the richest person on earth and happens to control several of the most important and powerful corporate entities on earth, so he can't just be dumpstered. The whole "nazi salute" mess is the prime example of this new dynamic: the legacy media is doing its absolute best to sink him with the claims, which despite their dubious nature would previously have sunk anyone not actually royalty, only to watch the ADL and the prime minister of Israel defend him, and twatter users generate 100x the engagement numbers of their original hit pieces with memes pointing out their hypocrisy by showing any number of videos of Dem politicos doing the same gesture.
The future belongs to the shitposters, and I for one am here for it.
I'm surprised at the fact that despite knowing how counterintuitive the original Monty Hall problem is, people are still approaching this new version without Bayes' Theorem. I mapped it out on paper, hopefully the symbols are self-explanatory and my handwriting is legible: https://imgur.com/a/1BeuxNe
I am so confused whenever someone says this. I first learned of this problem when I was a teen, and found it a clever little bit: an interesting and very intuitive twist on probabilities. This is not a brag, I just genuinely don't understand what's so unintuitive about it. I wonder if people are misunderstanding the conditions (i.e., not realizing the host will never open the door with the CSR behind it).
A) I don't know where the car is, so each remaining door is equally likely;
B) opening the door adds no information (wrong), since I *still* don't know where the car is;
C) therefore, switching is equiprobable to not-switching.
The average joe isn't mapping out the tree of all possible outcomes, he's trying to track the physical location of the car by mutating a single world-state.
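For the classic version, the Bayes computation is short. A sketch (you pick door 1; the host, who never opens your door or the car's door, opens door 3 to reveal a goat):

```python
from fractions import Fraction as F

# Prior: the car is equally likely behind any of the three doors.
prior = {d: F(1, 3) for d in (1, 2, 3)}

# Likelihood that the host opens door 3, given the car's location.
# Car behind 1 (your door): host picks uniformly between 2 and 3 -> 1/2.
# Car behind 2: host must open 3 -> 1.  Car behind 3: impossible -> 0.
likelihood = {1: F(1, 2), 2: F(1, 1), 3: F(0, 1)}

evidence = sum(prior[d] * likelihood[d] for d in prior)
posterior = {d: prior[d] * likelihood[d] / evidence for d in prior}
print(posterior)  # door 1: 1/3, door 2: 2/3 -- switching doubles your odds
```

The asymmetry is entirely in the likelihood row: the host's choice is constrained when the car is behind door 2, and that constraint is the "information" step B above denies.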
The A16Z comparison with the S&P 500 is very interesting! Though kind of funny to post it now through 2018 instead of 2024, I'd love the fuller figures through today and see if this stands up (though the S&P has continued to do so well, it wouldn't be that surprising).
I think this partially has to do with how strongly the S&P 500 performed over that period -- mid-teens returns are something VC is probably pretty pleased with, but the S&P 500 is just way above most expectations for those time periods, even probably ca. 2009.
And the 2016 vintage fund won't have had time to have enough investments grow by 2018 to have a positive return, this is the classic so-called "J-curve." Nothing to see there.
However, I think it does get to the fact that a bunch of VCs are just totally mathematically illiterate (or claim to be) and say that the valuations they invest at don't matter, which they definitely do (I actually made an explainer about this a couple of years ago, alas): https://www.youtube.com/watch?v=B5V1Z5VwaVI
I don't think Marc falls into that category, but a lot of VCs do.
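For what it's worth, the arithmetic behind "valuations matter" is trivial. A toy illustration with hypothetical numbers (same exit, different entry valuations, ignoring dilution):

```python
# Hypothetical: two funds buy into the same startup that later exits at $500M.
exit_valuation = 500e6

for entry in (10e6, 50e6):
    multiple = exit_valuation / entry
    print(f"entry at ${entry / 1e6:.0f}M -> {multiple:.0f}x return")
# entry at $10M -> 50x return
# entry at $50M -> 10x return
```

A 5x higher entry price is a 5x lower multiple on the identical outcome, which is the whole point: the exit doesn't care what you paid, but your LPs do.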
#6. It's been a long time since I've laughed so hard. As a thank you, here is a godawful addition to the list:
"She skimmed through the commentary, scrabbling greedily for actual first lines like a rabid raccoon not yet too far gone to feel hunger, screaming with laughter like an unfortunately ratchet soprano."
Gondola grieved for each shrimp as he ate it and the sound of his own chewing seemed full of pathetic cries, but also some squeaky noises because chewing shrimp sounds that way sometimes, you know?
“I killed her in the prequel, I will kill her in the prequel, I may have just killed her” he murmured, backing away from her live body and stumbling over tenses, her dead body, and also a bunch of luggage and shit.
When she heard Rod’s car in the garage Mona looked for a quick place to stash the vibrator, regretting that it was time to stop cumming and cumming and cumming and cumming and cumming and cumming and cumming.
Re: Turkey's fertility rate. The demographic transition is definitely not only associated with Europe.
Iran had one of the fastest demographic transitions of any country. In 1979, the year of the Islamic Revolution, Iran's fertility rate was 6.5. It dropped below replacement in 2000. This is about as dramatic as China's one child policy.
Speaking as somebody who was a regionally competitive strength athlete and has spent more time in gyms than most nerds have spent reading (and thus have seen a LOT of people on testosterone / gear), I'll chime in.
Zuck: no testosterone, just good diet and exercise.
Reasoning: no mass, no popped traps, average "fit" BJJ body totally attainable with a reasonable amount of training and a good diet
Musk: he's very likely on TRT, and I wouldn't be surprised if he's blasted and cruised a few times (ie taken higher-than-TRT doses a couple of times), but I doubt he's routinely on supraphysiological doses.
Reasoning: he's had some jaw changes when older vs younger, and this can be a characteristic change driven by supraphysiological levels of testosterone. And we know he's probably on TRT because of his tweets. But in terms of the reductions in body fat and increase in fat-free-mass you would see with routine blasting, it's totally absent. He has zero muscle tone in his arms and torso, and he would have more if he was blasting, even if he wasn't training.
Bezos: very much on higher-than-TRT doses. He's got the characteristic popped traps, and pretty significant arm development / size accompanied by the barrel torso you see in guys on supraphysiological levels of gear who are bulking or who have shitty diets. Especially for his age, those are pretty strong tells.
Also, for anyone who's a man over 40 here (which has got to be at least 2/3 of the audience), TRT is an absolutely magic life changer that makes you feel ~10 years younger. It is THE strongest "quality of life" intervention available to dudes 40+.
I have a substack post going over the benefits, risks, and possible side effects with a decent amount of rigor, here's a link:
Which I suppose is MY turn to be surprised, because it sure doesn't *feel* like a bunch of young blades in the commentariat. Of course, the commentariat may skew older than the survey answer-ers.
My intuition was that most of the readers would be between 30 and 40 based on me being early 30s and having found the blog towards the tail end of the SSC era. I'm likewise surprised at how many 20 year-olds there are according to that graph.
> Your list of possible down sides is missing prostate cancer.
When I looked into it, pretty much all of the known events were individual case studies, and it more or less fit with the other "pretty rare side effects likely down to biological variation."
This meta-analysis essentially points to the same scarcity and lack of data:
Lenfant et al. *Testosterone replacement therapy (TRT) and prostate cancer: An updated systematic review with a focus on previous or active localized prostate cancer* (2020)
"Until more definitive data becomes available, clinicians wishing to treat their hypogonadal patients with localized CaP with TRT should inform them of the lack of evidence regarding the safety of long-term treatment for the risk of CaP progression. However, in patients without known CaP, the evidence seems sufficient to think that androgen therapy does not increase the risk of subsequent discovery of CaP."
We're certainly in an information tech boom, but outside of social media and search, have there been a lot of particularly notable gains? Not just regular gains, but denoting a "boom" and significantly different than previous eras?
You can say AI, but the practical applications haven't come in. AI is still very much a money-losing operation and will likely not pay off for several years, maybe longer.
There are B2B AI vendors who are printing money right now (Databricks, Palantir, etc.) because enterprise AI is valuable. It's the consumer-facing AI applications that haven't materialized.
I can say AI! I think it's amazing and at least tech-boom-adjacent that I can now pay a pittance and have access to extraordinary technology that didn't even exist five years ago.
I don't really want to quibble on how exactly to define "boom." my feeling is that there's both genuinely new cool stuff (largely around AI) and also I have the general feeling that I have easy access to insanely high quality, insanely cheap consumer tech stuff everywhere I look. It's cool! We don't have to call it a boom.
I feel like I'm living in a tech boom on the scale of a generation, but not on the scale of the past two years (ie if you compare to the dot com bubble of 2000, or the social media boom of ~2010)
Any country where the cost of seeking public office is a multiple of the average annual national income should not in any sense be thought of as a democracy; at best, it's an oligarchy where the opinion of the demos occasionally gets to act as a tiebreaker.
The cost of seeking local office isn't anywhere near that high. My lower middle class uncle was the (elected) head of public works for the rural town he lived in. My upper-middle class coworker used to be on the city commission for the suburban city he lived in. He had to switch to part time at my company in order to do both. But I am not sure if he actually had any significant pay cut, other than having to attend public meetings on some evenings.
Before the recent semi-implosion of political parties in the US, a major role of parties was to find successful local politicians who have promise and help them with the elections for the higher offices. That doesn't seem particularly awful to me.
Reminds me of a comment that a Chinese doctor made at a workshop on reproducibility: the concept that you should standardize medicine and as a doctor you should treat two different patients in the same way and expect comparable outcomes, that struck her as weird and a very Western concept. In Traditional Chinese Medicine, apparently you try to come up with an individual treatment for each patient.
I don't agree with her and doubt that TCM gives better results overall, but it did make me think.
She described it the following way, or that is how I understood it. Think of herbs, and how traditional Chinese doctors would use them. They have a large repertoire of herbs, and for all of them, they have some idea of what they do. Like some work against fever, some make you sleepy, and so on. When they see a patient, they would look at what symptoms that patient has, and brew a tea from some combination of herbs. Even if you have only 20-30 herbs, you will probably never end up mixing the same tea twice. It's probably even more complex than that, because you can use different dosages, and of course you also have treatments other than herbs.
She found it very alien to give the exact same treatment to several patients and see how well it works on average. Which is our Western evidence-based approach. Because to her, that treatment would not be the mixture that a Traditional Chinese doctor would have recommended for any of the patients (or at most for one, but then all the others would get different mixtures), so it seemed strange to her to expect this to work at all.
Some branches of Western medicine are trending that way, for example, biological cancer treatment. That is often tailored to a single patient. It is costly, though.
I guess the cottage industry of woke-adjacent health fads? For example, prominent US health authorities in the Summer of 2020 saying in unison that BLM protests were OK because "fighting racism is more important than COVID" while screaming at much smaller conservative protests a month earlier. Or when they tried to organize vaccine distribution based on "diversity" instead of vulnerability?
I will take my Eastern European grandma telling me to wear a warm sweater so as not to get "cold" over the entire medical establishment of my country going insane with some new Tumblr religion on a regular basis.
Not quite something people make fun of Americans for, but things that they believe are necessary more than other countries, and are of questionable benefit:
- Yearly general health checkups for healthy people.
The Russian healthcare system promotes yearly general health checkups for healthy people (диспансеризация, roughly "dispensarization") as the best thing you can do for your health, since it lets you detect and treat any illnesses at an early stage.
> Max Tabarrok: AGI Will Not Make Labor Worthless. Teenagers’ labor isn’t worthless, even though adults are more skilled in every way and there are ~ten times more adults than teens.
His post doesn't mention teens at all, but instead "population growth, urbanization, transportation increases, and global communications technology".
17 reminded me of your On Priesthood article. Are gamers also a priesthood? You can be the richest man in the world, and spend a bunch of money to grind a HC character on your account to be top-15, but in the end, you have to actually play the game and if you don't play the game none of that matters and in fact actually makes you look worse than somebody who doesn't game at all. Speedrunning communities spend a lot of effort to track down the liars in their midst for ~0 benefit.
Is it just that the scientific/journalistic/etc priesthoods actually have cachet, while the Gamer priesthood has similar norms but not enough weight to throw around? I haven't watched Musk's PoE2 gameplay nor do I play PoE2 nor have I watched the videos about his account, but I Trust The Experts that he is comically bad at PoE2.
I dunno, maybe I just didn't understand On Priesthood but the similarity feels striking.
...I don't even know what you're trying to say here? Yes, I'm sure everyone agrees that Musk is a terrible person, but why would he care about that? He already has all the power he needs.
He's specifically violating the norms of a community (Gamers) in an attempt to ingratiate himself to the community, but it doesn't work because they defend those norms fiercely. It's like somebody coming into a scientific circle and committing academic fraud; no amount of money fixes that.
Yes, yes, he's very rich, but the very fact that he has lashed out at a man best known for using a dead rat as an alarm clock for criticizing him shows he does, in fact, care.
> He's specifically violating the norms of a community (Gamers) in an attempt to ingratiate himself to the community
But I don't think he's even trying to do that. He's just trying to brag that he's such a Gigachad that he can be the richest man in the world, take over a country, and still have time to effortlessly get the world record of his favorite game.
I mean, that's one reading. I think it's more likely that Elon Musk is extremely insecure based on all of his behavior and that he is not, in fact, some kind of uncaring super-robot because money makes you immune to having feelings.
Additionally, it is hard to imagine anybody who is not a gamer caring about your rank in PoE2. Maybe your wife would say "cool" because she's happy you succeeded at that thing you were working on.
No, I'm saying that he does care. It's just that he's also unfathomably narcissistic. This comment from the reddit thread put it best:
> He doesn't want to be one of us, since that reflects poorly on his ego. He has to be superior. His narcissism covers the rest of the gaps that highlight how totally obvious that not only is he not superior to the average person playing the game, but that he's significantly worse.
> Yes, I'm sure everyone agrees that Musk is a terrible person, but why would he care about that?
Narcissists are insecure. They need constant validation. The fact that he deluded himself enough to stream playing PoE2 is maladaptive behavior that comes from the narcissism. What he wanted was to show off his cool PoE2 character and have everybody oooh and aaah at how amazing he is. He is now mad that it is not happening and that people are instead saying he's comically bad at PoE2.
If I had done something this embarrassing I would probably just turn off notifications on the post, and try to focus on other things. Instead, Musk has decided to pick a fight with Asmongold, and lose, because he cares very deeply.
None of this really has anything to do with gamers-as-priesthood, though.
As someone with a few speedrun world records (Diablo II, not PoE), I can confirm the best you can hope for is an "oh that's nice, dear" from your wife. Or maybe that time my parents watched a recording of a charity event and said "wow, you really looked like you were having fun".
I think by priesthood I mean something separate from just skill, more like skill + induction into a prestigious inward-facing community with strong reputational effects. I don't think gamers have strong reputational effects - if you're not Elon Musk and therefore already famous, other gamers don't know what you're doing. I think Elon Musk's fame is a bigger part of this than gamers being a priesthood.
I mean, Karl Jobst runs a YouTube channel with over a million subscribers, and half the content (and the most popular content) is about catching people faking speedrunning, even in obscure games, and he is currently involved in a lawsuit with somebody (Billy Mitchell) who lied about his own world records and then sued him (and others) for pointing it out. There's a whole stereotype about calling people "fake gamers" for a reason.
Obviously, the only reason anyone outside the community cares is that Elon Musk is famous, but internally the reason they care is that being a fake gamer is bad. If Elon Musk was a mid-level streamer who tried to pass off an account he paid somebody to grind on as his own, it would also sink his rep within the gaming community if it came to light. There was a huge internal drama about Dream manipulating RNG in a Minecraft speedrun a while back, too.
I'd say nah. Because priests are high-status, even outside the priesthood. Commoners pay deference to priests, and look to them for expertise.
Gamers are certainly a "community", in the sense that they have their own social norms and internal hierarchy. But the average joe doesn't treat their friendly neighborhood gamer with deference or seek their expertise, as they might for a doctor/professor/lawyer/etc. In contrast, when was the last time you consulted your local gosu about build orders?
(Eh... ... ... maybe South Korea's an exception.)
The Karl Jobst thing, I think, can be explained away with generic "schadenfreude, and contempt for dishonesty".
33: How would one design a version of this system which avoided this failure mode?
Just work with people whose problem is anger management, negativity towards other people or some such. Then if they agree to pay you're *sure* it worked.
34: Is there any health belief that foreign countries make fun of Americans for?
Yeah, the belief that burger and fries is a normal meal.
Yes, but do they serve them in school or workplace cafeterias as a main or only option? Sure, people eat burgers everywhere, but in most places I've been (Europe mostly) they are thought of as a guilty pleasure or convenient quick snack, not as a staple part of a diet that you can serve to people on the regular and say you've provided them with meals.
Similarly, in the US, whenever there is only one option of food available - a museum restaurant, the only roadside cafe for miles, the only cafe in a small town, etc. - it's almost always some kind of burger or sandwich; in other places that would be an exception, in my experience.
There are places (e.g. the Middle East and the Balkans, based on what I know) where "ground meat with spices" is very common - maybe called kofta, kebapche, gyros, losh kebab, doner kebab, chevapi, etc. Tastier than hamburgers.
I never understood the burger and fries hate. It's a grilled ground beef sandwich with cheese, veggie toppings, and sauces, with a side of fried potatoes. I think people just have associations with "fast food", and people really love being snobby about fast food.
Playing devil's advocate here, but burgers generally have a *lot* more meat in them relative to other components (especially the bread) compared to most other e.g. deli cut sandwiches.
My understanding is that Americans are, in general, fairly unusual in the quantity and frequency with which we consume meat, in no small part due to our heavily subsidized beef and poultry industries.
Every sandwich is listed with 1/4 lb. of meat except for the Triple Dip (1/2 lb.) and the sandwiches where the meat is bacon or tuna (no weight listed).
By comparison, a McDonald's Quarter Pounder with Cheese is... you know what, I'm not even going to link it.
These are all exactly the same thing. Actually, the Quarter Pounder with Cheese has more calories than the deli-cut sandwiches do, so if anything there's _less_ meat relative to everything else.
If you think burgers have a large proportion of meat relative to deli cut sandwiches, you ought to try a real corned beef sandwich from a real deli. Even if you have Swiss cheese and coleslaw on it, the proportion of meat is much larger than a burger.
Other deli sandwiches are similar, but with corned beef it seems more pronounced.
Interestingly, corned beef sandwiches were the main counterexample I thought of, since I have indeed had some with monstrous amounts heaped on. However, I get the impression that those are intentionally sandwiches of excess, and most people would consider it odd to have one every day, unlike burgers.
Fast food burgers can be low-priced and convenient, though I wouldn't call them "cheap" for what you get nowadays.
But a burger at a real sit-down restaurant runs $10, or $15 at more mid-range restaurants, and most aren't even particularly crave-worthy. Sure, they're acceptable, but nothing special.
Re 42 - to make the obvious argument, it's because the wokes have a point. Racism, sexism, etc, really were (maybe are?) too accepted in society, even if some people go too far.
As for the timing, why now - because of some events that made their argument more compelling: the fights over gay marriage that the left won decisively, the Bush administration being unpopular, and then Trump saying on tape he groped women and winning a presidential election a month later.
Re 45 - I think it's because of which side feels like it's winning/ascendant and which side feels like it's losing. Obama won in 2008 promising to be a compromiser; my memory of 2008 and 2012 is that the Republican primary debates were the candidates one-upping each other about who was more pure in their conservatism. When Obama won in 2012, it switched (again, gay marriage played a big role: 2012 is when it won a popular referendum for the first time, and it became clear it would never lose again). (Of course this also relates to the above; it's not surprising wokeness took off in Obama's 2nd term.)
Re 46 - I've heard that older buildings had fewer windows because of climate control (I think especially a lack of AC).
>Racism, sexism, etc, really were (maybe are?) too accepted in society
That doesn't answer the question of why wokeness became dominant at the specific time that it did, given that essentially everyone (including woke people) agrees that Western society was vastly more racist, sexist, etc. in the past than it is now.
I think there's much more to it (social media) but that actually supports the original point: further back in the past, there weren't enough opponents of racism/sexism to sustain a movement.
I think somewhere in the 90s-00s is when anti racism and anti sexism became mainstream--but crucially not so dominant that they had nothing left to criticize.
At notable points in American history, there were enough opponents of racism to fight a war, and there were enough opponents of sexism to get multiple constitutional amendments.
It's not the number of opponents, or apparently even having something to oppose.
I mean, that's not really why the war was fought, and racism remained strong enough for Jim Crow to last 70 years after the war.
But sure, I'm being a bit deliberately vague about what "mainstream" and "anti racist" mean.
And I definitely think social media is important in changing how many people identify as anti-racist, how many anti-racists people perceive there to be, and how people define racism and anti racism.
But the basic argument I'm making is, if "liberalness on race/gender/identity issues" is normally distributed across the population, and if wokeness is the set of points past some threshold on whatever scale we're measuring this, then if the population mean shifts in that direction, you'll see way more woke people.
"Society was less racist in the past" is basically "the mean liberalness shifted in the liberal direction", so under this very simple model you'd expect to see more wokeness, not less, as society liberalizes.
Obviously that's overly simplistic, but I think it's a pretty reasonable first approximation.
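To put toy numbers on the threshold model (the cutoff and the mean shifts here are made up purely for illustration):

```python
from statistics import NormalDist

# Toy model: "liberalness" ~ Normal(mean, 1); "woke" = past a fixed threshold.
# Shifting the population mean rightward balloons the tail mass.
threshold = 2.0  # arbitrary cutoff, in units of the original standard deviation
for mean in (0.0, 0.5, 1.0):
    tail = 1 - NormalDist(mu=mean, sigma=1.0).cdf(threshold)
    print(f"mean={mean}: {tail:.1%} of the population past the threshold")
# mean=0.0: 2.3%, mean=0.5: 6.7%, mean=1.0: 15.9%
```

So even a modest shift in the mean multiplies the tail population several times over, which is the whole point of the approximation.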
Yeah, related to the vagueness I am being a bit facetious. We're dealing with such slippery terms, that mean something different to everyone that uses them and that shift depending on the context, so often constrained and gerrymandered to cover whatever is convenient at the time.
I think the normal distribution example is interesting and certainly reasonable enough, but (as part of the slipperiness of language) I would disagree that increasing mean liberalness necessarily increases wokeness, since wokeness tends to decrease liberalness. But that may be extending too far for a first approximation, and as we get into the second we notice that the model falls apart at the tails.
I mean "liberal" in the one-dimensional "pro-slavery, pro-Jim Crow, pro-traditional gender roles to pro-Black Panther, pro-Valerie Solanas axis" sense.
Like, just a one number summary of "how much do you construct your identity in opposition to anything that can be construed as 60s-era racism".
But none of them really seem particularly unique to the mid-2010s.
Trump was far from the first philandering or handsy President (JFK, LBJ among many others). Bill Clinton had been accused of sexual misconduct, sexual assault and rape on numerous occasions before, during and after his presidency.
Bush was far from the first unpopular Republican President. Reagan only had a 35% approval rating in 1983.
Gay marriage has only the most tangential connection to the racism and sexism you claim was the motivating force behind wokeness. If a massive legislative victory can prompt such a reaction, why didn't anything like this follow Roe v. Wade, which actually /does/ pertain to the racism and sexism you're talking about?
My guess is on interconnectedness. We never had social media that could amplify stuff happening in the world in a matter of hours, largely without any editorial board deciding what should or shouldn't be said. When most of the world came to you via the newspaper or the evening news on TV, it was a lot harder for the viral spread to catch on.
This sounds much less like "wokeness became dominant because it's obviously true and correct" and much more like "wokeness became dominant because of the rise of social media", which was a large part of Paul Graham's original thesis.
I mean, not necessarily. There is true information that is suppressed, or true information that is memetically viral in some communities but a total non-starter in others. Maybe in the '70s, a deep understanding of racism and its problems was commonplace in urban and academic areas, whereas on distant rural farms the inferential gap would be too great to explain it to a farmer.
But now that social media is here, claims can Moloch and toxoplasma their way to ascendancy, and everyone feels the need to have an opinion, be it to defend themselves from others, or just because they feel a need to speak their minds.
Wokeness can be right or wrong, but like all ideas, it needs a good memetic environment to spread.
EDIT: forgot to say where the deep understanding would be commonplace
The importance of gay marriage is less the size of its "connection" to racism and sexism, and more the speed and size of the turnaround in public opinion, and the size of the victory, not just legally but societally (and gay marriage is emblematic of larger turnarounds on issues around gay people). Abortion wasn't like that. I can't tell you that nothing has ever happened like that, but nothing recent comes to mind.
But just to make the connection, in case not clear to anyone, the woke millennials of 2020 were the same people being pro-gay marriage back in 2004/2008 when that was a pretty minority view.
Re Clinton - before my time, but I don't think anything like the Access Hollywood tape came out. Big difference between accusations and an actual taped admission of guilt.
Re unpopularity - not sure what Reagan is supposed to prove, he won 49 states the year after you cite. Bush was unpopular, never turned it around, and then his side lost big in 2008. This by itself isn't enough, but it was a contributing factor.
Paying out nearly a million in settlement fees isn't quite the same thing as a taped admission of guilt, but certainly the same ballpark.
Additionally, "wokeness became dominant because of how odious Trump is" seems ahistorical. The author of this blog was complaining about social justice (as it was then known) as early as 2013, the "Great Awokening" is usually stated to have taken place in 2014 (the year of Gamergate and the Ferguson riots) and many suspect Trump was elected in part because of widespread opposition to wokeness. The author of this blog even wrote an article arguing that, were Trump to be elected President, it would cause woke people to double down.
"His side lost big in 2008" - you mean a completely different candidate lost in 2008, because Bush had already reached his term limit. That's like arguing that Hillary losing in 2016 reflects poorly on Obama personally.
"Paying out nearly a million in settlement fees isn't quite the same thing as a taped admission of guilt, but certainly the same ballpark."
Definitely not the same ballpark! Everyone understands people can settle lawsuits not because they're guilty but because they think it's better than going to trial; also, people will usually support their guy in ambiguous situations, so there's a big difference when their guy is actually on tape admitting it, at which point it's no longer ambiguous.
But also worth noting that part of the effects of wokeness is many Democrats souring on Bill Clinton.
"Additionally, "wokeness became dominant because of how odious Trump is" seems ahistorical. The author of this blog was complaining about social justice (as it was then known) as early as 2013"
Wokeness existing in 2013, "Great Awokening" being coined in 2014, and "wokeness becoming dominant" because of 2016 aren't in conflict.
But more importantly, I didn't say it "became dominant because of Trump"; I said that was one event that made it more compelling (same with the arguments above re Clinton). I couldn't tell you exactly how much of the prominence of wokeness to attribute to various events, percentage-wise, just that all these things contributed.
"That's like arguing that Hillary losing in 2016 reflects poorly on Obama personally"
I think it's clearly true that a sitting president's popularity has a strong influence on their own party's candidate if it's a different person.
People don't focus on this in 2016 in part because it was so close, and in part because Hillary had a lot of baggage, but if people liked Obama more (or less), then Hillary would have likely won (or lost by more).
Do people dispute this with Biden in 2024? Clearly his unpopularity was a major factor.
Disagree about gay marriage being tangential to racism and sexism; I would guess that a lot of the early, "ideological" support for gay marriage came from people who explicitly analogized it to the civil rights movement, interracial marriage, etc.
Support for gay marriage was (among other things) a barometer for how many people supported the ideas of the civil rights movement strongly enough to extend the basic principles to a new contested domain.
The fact that gay marriage became mainstream so quickly I think shows a decent amount of "latent wokeness" in the population; I am not sure I totally understand what argument Jack is making, so I don't know if I agree with it, but I don't think it's crazy to argue that the success of gay marriage might have simultaneously convinced the proto-woke of the early 2010s that the ground was more fertile for their beliefs than it was, and of the essential rightness of their beliefs.
Again, not to dismiss that there were other important factors, or to imply that I totally believe this model, but I think there's something here worth exploring
According to the bugman (and I'm inclined to believe him): he particularly emphasizes that it's because the Puritans who absorbed Marxism during their childhood in the ~1960's gradually became tenured at Harvard. And since Harvard is a priesthood (there's a reason he calls the 4th Estate & Academia "The Cathedral"), Harvard's purity-spiral ethos naturally disseminated to wider society.
(This dovetails with some other theories I've heard elsewhere, the common theme being that it's not enough to look at the surface-level ideology. You also have to look at the cultural substrate in which the ideology is embedded.)
His latest post on GM directly grapples with PG's own musings on wokism, btw.
“Wokeness” isn’t that different from “Political Correctness” in the ’90s. It’s the same argument, just at a higher volume and after some significant victories.
It was hard to understand why Elon Musk would lie so much about video games, but the follow-up is even more baffling. How is it possible that anyone could *own* a social media platform and not understand that content creators have video editors, while having a conversation with a creator about bringing their content to X? (Elon decided to leak these DMs in retaliation for this creator talking about him in Path of Exile.)
I can't even imagine a headspace that would lead to this fundamental of a mistake, no matter how work-drunk or even drunk-drunk. What would even be an analogy in another industry for not understanding something like this?
I'm more confused on why he hasn't disabled community notes on his own posts yet. Obviously there's nothing stopping him from doing that, so it's strange that he's insane enough to keep digging this hole while also having the discretion to not go scorched earth on his critics.
Elon regularly brags about how nobody is exempt from Community Notes, not even him [0]. He's been repeating that mantra too long and would probably look really bad if he backed out now.
This is so disgraceful of Musk. Even if that streamer *did* work for an editor of some sort, that doesn't begin to justify going out of your way to leak those DMs to hundreds of millions of followers! And this man expects me to link my bank account & send in my government IDs to X and use his Everything App?
Reading Graham's essay on PC and woke, I was surprised he didn't link to this classic Bloom County cartoon which captured the whole phenomenon perfectly back in 1988:
Many Thanks! If the Woke insist on changing the currently-polite-term repeatedly, it would be nice if they would at least do it on a consistent schedule (say Jan 1st of the first year of each decade) and publicly announce what the new politically correct term _is_ . Ahh, the joys of language policing...
The point isn't to have people use any particular language. The point is to tell people to use something other than what they're using. An official announcement would defeat that purpose.
In my understanding? Not so much cruelty. The point is that you give someone else a command and they obey you; it's to demonstrate that you're more powerful than they are and they have to do what you tell them.
Is the revised Monty Hall actually any different? In the normal Monty Hall, you have one success state and two failure states; the revised Monty Hall is the same.
Probabilities and statistics always get unintuitive when there's any constraints on distribution. I don't think ones where you can write out the whole probability tree are very difficult, and it's true that, just like the original problem, the correct move is twice as good as the incorrect move, but I wouldn't assume that prima facie - we're eliminating the outcome where Monty opens the door showing the goat you really wanted, so you just need to check the numbers fall out.
> I don't think ones where you can write out the whole probability tree are very difficult
I remember 15 years ago reading EY's Bayes theorem tutorial and he said that we probably got the problems wrong and this is why we need the theorem. No, I got them right because I wrote down all the possibilities in detail, I don't need no fancy theorem.
#14 - Felix Hill, re: Scott's question about what dose of ketamine he took. I did a lot of reading on a Reddit sub about ketamine, one where everyone was taking it under the supervision of an MD. Those who were getting infusions at the doctor's office sounded like they were getting high doses. "K-holing" was considered an especially desirable reboot. I experimented with ketamine provided by a doctor friend, k-holed once, and it was one of the most unpleasant experiences of my life. It's sort of like being unconscious, except that there's still a little scrap of you awake and aware that you are completely out of it. The bad effects of that experience didn't linger, though. I was fine the next day.
Sounds like Hill was tried on many somatic treatments once things really started to go downhill. I wonder if he was ever tried on an MAOI.
> is there any health belief that foreign countries make fun of Americans for? (I’m not looking for conspiracy theories about vaccines
does any non-western-Europe country mock Americans for being too untrusting of American doctors?
"hmm yes I survived the collapse of the soviet union, but Americans should believe the vax are safe because the extremely rich capitalists say so and buy ads on their news networks"
tho given your standard of "new york times doesnt lie" they will just be telling you to get a vaxxine and implying everyone who takes it is fine
> I would expect them to backfire.
For a smaller part of the population than you would hope, and it was probably one of the more effective propaganda campaigns; but I yell at the TV every time.
Re. 18, what's wrong with Elon Musk - the longer this goes on, the more I believe that Musk is a narcissist/sociopath who has chosen the "autistic polymath genius" persona as his mask, and hates hates hates it when he's publicly forced to admit that he was wrong, because it threatens both his ego and his disguise. He does have some undeniable talents - on one side, for buying promising companies and hiring talented people and screaming at them until they (sometimes) come up with something impressive; on the other, for convincing people he's bloody brilliant (until they hear him talk about something they're knowledgeable about).
Drugs are pretty well established and he is just an absurd risk taker in that he made "fuck you money" at least a decade ago and has kept doubling down
Part of the personality shift may be Musk getting more brazen as the fact that he can get away with almost anything sinks in.
Part may just be a shift in public perception. Some people have been calling out Musk's deceptions and bullshitting long before 2020, but back then the façade was still intact, so few people were willing to listen. But the evidence has kept piling up.
Many people I considered to be very reasonable and well-adjusted underwent big personality shifts around then. It was a uniquely polarizing time, and I am very skeptical Sam Harris' side of the story is any more accurate than Musk's.
Isn't it a problem insofar as everyone who invests with A16Z has the option of putting their money in the S&P instead, and unless they expect A16Z to beat the S&P, they'll just do that? (unless they're looking for something countercyclical which, again, I don't think tech is)
You can trivially beat the S&P in an up year by buying the S&P with 2x leverage (or 3x, or whatever). Or just buy the magnificent 7 and ignore the other 493 companies. But then when there's a tech downturn, you get absolutely slaughtered.
What sophisticated investors want is good returns year in and year out, which means buying the S&P and also buying other good investments that are uncorrelated with the S&P.
I don't know if A16Z is actually uncorrelated with the S&P because the S&P is so tech-heavy and there are probably correlations between big and small tech companies. But "did it beat the S&P" is not the main benchmark that most A16Z investors are using.
It's a little weird to me to point to VC as an example of "good returns year in and year out"; at least, its reputation is that it invests in companies that either strike it big or go belly up. If you want "good returns year in and year out" (which I don't necessarily care about), invest in bonds or something.
> at least its reputation is that it invests in companies that either strike it big or go out belly up. If you'd want "good returns year in and year out" (which I don't necessarily care about), invest in bonds or something.
Yes, but the theory is that these bumpy returns are uncorrelated enough with the other asset classes you own, so that your overall returns get smoothed out (and are higher) over time.
The way you get consistently good returns is to invest in lots of uncorrelated things that all individually do well on average.
Bonds are nice but face a lot of interest rate and inflation risk. So you want to be invested in other stuff that does not face those risks.
Also bonds have capped upside which is not great if you're ok taking on more risk to get higher returns.
"The way you get consistently good returns is to invest in lots of uncorrelated things that all individually do well on average."
To me startups sound, if anything, like the opposite of this. Not uncorrelated, don't necessarily do well individually on average.
"Also bonds have capped upside which is not great if you're ok taking on more risk to get higher returns."
The whole idea of trying to get consistently good returns is presumably to accept lower overall returns in exchange for lower variance. If you're going to not do something because it's lower returns overall, fine, but then you're back in territory of more risk and bad return in some years.
> To me startups sound, if anything, like the opposite of this.
US startups as an asset class are probably uncorrelated with other asset classes like large cap stocks (i.e., S&P 500) or REITs or private credit. So if you're a rich person or university endowment holding REITs and private credit and large cap stocks, it could be very reasonable to add A16Z as a way to diversify.
> accept lower overall returns in exchange for lower variance
The goal of diversification is to lower variance *without* compromising risk-adjusted returns. If you expect three uncorrelated assets to each return 8% over the long term, you get a much lower variance by investing in all of them equally than by going all in on one of them.
There's lots of fancy math to it, but basically if you know the expected return and variance of a bunch of assets, and your own risk tolerance, you can plug that into a formula to tell you how much to invest in each asset: https://en.wikipedia.org/wiki/Modern_portfolio_theory
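As a minimal sketch of the variance-reduction effect (all numbers made up, and equal weights instead of the full optimizer):

```python
import numpy as np

# Three uncorrelated assets, each with (hypothetically) an 8% expected
# return and 20% volatility. Zero correlation means a diagonal covariance.
mu = np.array([0.08, 0.08, 0.08])
sigma = 0.20
cov = np.diag([sigma**2] * 3)

w = np.array([1/3, 1/3, 1/3])      # equal-weight portfolio
port_return = w @ mu               # 0.08 -- expected return unchanged
port_vol = np.sqrt(w @ cov @ w)    # 0.20/sqrt(3), about 0.115 -- much lower risk

print(port_return, port_vol)
```

Same expected return, volatility cut by a factor of sqrt(3); with n uncorrelated assets the factor is sqrt(n), which is why investors keep hunting for genuinely uncorrelated return streams.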
I think there are several things at play here.
1) S&P500 had a very strong performance over this period, much stronger than the vast majority of ex-ante expectations. Index returns are generally hard to predict. This means that the update from A16Z underperformance over a period is not very large.
2) Almost everyone who invested in A16Z already has a much bigger position in S&P500. The decision they had to make was whether a marginal dollar of their investment should be allocated in A16Z. This may have been optimal even if A16Z was ex-ante expected to underperform S&P500. For example, if an investor thinks that A16Z is less than 100% correlated with S&P500, and is expected to be lagging, an optimal portfolio might still include an allocation to A16Z for diversification reasons. This portfolio would be expected to have a lower return than S&P500 but higher expected return per unit of expected risk.
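Point 2 can be made concrete with invented numbers: even a fund expected to lag the index can raise return per unit of risk, provided it's uncorrelated.

```python
import numpy as np

# Hypothetical figures only: an index at 8% return / 20% vol, and an
# uncorrelated fund expected to LAG it at 6% return / 30% vol.
mu = np.array([0.08, 0.06])
vol = np.array([0.20, 0.30])
cov = np.diag(vol**2)            # zero correlation assumed

w = np.array([0.8, 0.2])         # 80% index, 20% fund
mix_ret = w @ mu                 # 0.076 -- slightly lower expected return
mix_vol = np.sqrt(w @ cov @ w)   # ~0.171 -- noticeably lower risk

print(0.08 / 0.20)               # 0.40  return per unit risk, index alone
print(mix_ret / mix_vol)         # ~0.44 better, despite adding a laggard
```

The laggard drags the expected return down a little but drags the variance down more, so the blended portfolio dominates on a risk-adjusted basis.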
About the revised Monty Hall problem. Prior to choosing or any doors being opened, you know there will be a 2/3 chance Monty has the prize you really want (the special goat). He opens a door and shows you a prize you don’t want (the non-special goat). He still has a 2/3 chance of having the prize you really want. Switch. Maybe I am not understanding what this variant is supposed to add to the original problem.
Even with the original one, if he intentionally reveals a door you don't want and you didn't select, you get some information about the door you didn't select, but none about the door you selected, while if he just randomly opens a door and it happens to be one you don't want and didn't select, then you learned something about the door you selected.
This one changes things yet again - he's trying to avoid one particular door, but it turns out that's not the door you care about, so you learn something different about each of the doors than in either of the other variants.
The difference is that Monty thinks you want the car, and might have opened the door to the special goat. That he didn't is information.
> He still has a 2/3 chance of having the prize you really want.
I think this is where intuition fails. Before he opens the door, you're right, there's a 2/3 chance you don't have the special goat.
But after he opens the door, you've learned something - that it's not possible you have the normal goat! And that 2/3 chance of you not having the special goat was predicated on there being a 1/3 chance that you **could** have the normal goat! So it's no longer true.
And more, since there's a 100% chance he opens the normal goat door if you have a special goat, and a 50% chance he opens the normal goat door if you have the car, you've learned even more! You've learned it's more likely that you have the special goat, since it's less likely he would have opened the normal goat door if you have the car, and more likely if you have the special goat.
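A quick Monte Carlo sketch of this argument (the host opens a random non-chosen, non-car door, oblivious to which goat is which, and we condition on the normal goat being the one revealed):

```python
import random

def trial(rng):
    doors = ["car", "goat", "goat*"]  # goat* is the one you actually want
    rng.shuffle(doors)
    pick = rng.randrange(3)
    # host opens a door that is neither your pick nor the car
    openable = [i for i in range(3) if i != pick and doors[i] != "car"]
    opened = rng.choice(openable)
    return doors[opened], doors[pick]

rng = random.Random(0)
stay_wins = total = 0
for _ in range(200_000):
    opened, picked = trial(rng)
    if opened == "goat":              # condition: the NORMAL goat was revealed
        total += 1
        stay_wins += (picked == "goat*")

print(stay_wins / total)              # ~0.667: your door holds the special goat
```

So conditional on seeing the normal goat, your original door has the special goat about 2/3 of the time, which matches the update described above.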
Here's my solution to the revised monty hall problem, from tabulating all the options. Just to explain the notation, assume without loss of generality that you always select door 1 and the host then opens either door 2 or 3.
Let's say the true situation is:
c g g
That is, door 1 has a car and doors 2 & 3 have goats. After the host reveals a door, the situation becomes:
c g g -> c g
That is, the result of staying is car and the result of switching is goat. Let's warm up with the original monty hall problem. In this case, the 6 possibilities are:
c g g -> c g
c g g -> c g
g c g -> g c
g g c -> g c
g c g -> g c
g g c -> g c
Perusing the right hand side, if you stay 2 out of the 6 options give you a car, but if you switch 4 out of the 6 options give you a car.
Now switching to the revised monty hall problem, where g* is the desired goat. The 6 options are:
c g g* -> c g*
c g* g -> c g*
g* c g -> g* c
g* g c -> g* c
g c g* -> NA
g g* c -> NA
But notice that options 5 and 6 are not possible by the problem setup. The host will not reveal the car (because they are playing original monty hall), but you know that they also did not reveal g*. Out of the remaining 4 options, 2 of them have the desired goat behind the original door and 2 of them have it behind the alternate door. It's 50-50.
Your mistake is in not considering the possibilities where the host reveals g*. The problem states "_host is unaware of this_". In the c g g* case, both c g* and c g are equally likely - each of these two worlds should get half the probability of the g* c g -> g* c world (where the host has no choice).
if the host reveals g* then I definitely know what door to choose.
True, the host is equally likely to show g or g* - 1/2 each. Each of those scenarios is half as likely as the one in which you pick g* and he shows you g.
Does that mean that if I take the normal Monty Hall problem and simply paint the goats different colors so you know which one Monty opened -- then I would also be indifferent between switching and not switching?
Yes! Because the described situation is different - "goat X was shown" rather than "one of the goats was shown". This removes the possibility of you having selected goat X, so 1/3 of the possible cases are removed.
EDIT: more than 1/3 of the cases are removed:
- you selecting bad goat (1/3)
- Monty selecting good goat after you selected the prize (1/6)
It feels wrong and unintuitive that the optimal strategy changes because of an arbitrary difference that doesn't seem to affect anything physically. Can you describe it in a way that makes it intuitive?
It doesn't change anything, supposing a red and blue goat and you pick the first door (door choice doesn't matter):
R B C -> blue revealed, should switch
R C B -> blue revealed, should switch
B R C -> red revealed, should switch
B C R -> red revealed, should switch
C R B -> red or blue revealed, should stay
C B R -> red or blue revealed, should stay
Switch is still 2/3 compared to Stay at 1/3
You just colored the goats. You did not specify which goat is presented.
For example, if goat B is presented, then cases 1 and 2 are both removed, and one of cases 5 and 6 is removed too.
The change is that you and the host no longer care about the same thing. The host is still making decisions based on the car, but now you're making decisions based on the good goat.
It's like playing chess, but secretly you win if you take the opposing queen. Your opponent plays the same, because they don't know anything is different. But your strategy should change because your victory condition changed. (Maybe you aim for an early queen exchange.)
Back to the Monty Hall problem, let's say there's 1,000,002 doors, and 1 car, 1 good goat, and 1 million bad goats. You pick a door at random. The host opens 1 million other doors, all of which have bad goats behind them. The only doors left are the one you picked, and this one door way over there that you didn't even notice. That other door almost certainly has the car, which means that (by an amazing coincidence) the door you picked almost certainly has the good goat. So if you want the car, you switch doors, but if you want the good goat, you keep the same door.
(It'd be much more likely for one of the million doors that the host opened to have the good goat, but sometimes we get lucky.)
No, you did it wrong. In your scenario, the host is FORCED to open a million doors (and only show bad goats). This can ONLY happen if you first luckily picked the good goat or the car. The entire scenario is very, very unlikely; if you randomly pick your door, almost certainly you would have picked a bad goat and then the host would have revealed the good goat elsewhere. But that got ruled out by the scenario! All those likely worlds didn't happen. You are forced into a world where your initial lucky pick MUST have been either the good goat or the car. And it's 50/50 which it was ... and so it would also be 50/50 if you switched. Whether you're interested in the good goat OR the car. Switching doesn't matter.
This is very different from the original Monty Hall problem. In the original problem, the goats are identical, so the host can ALWAYS find an extra goat to reveal, no matter what you pick first. That doesn't happen in this new scenario. If you "accidentally" pick the (or "a") bad goat ... the host CANNOT complete the new scenario. So you have to begin the analysis by RULING OUT any possible world where your first choice involves a bad goat. Those worlds didn't happen.
> It feels wrong and unintuitive that the optimal strategy changes because of an arbitrary difference that doesn't seem to affect anything physically.
It is wrong
No, that person's math is wrong. If you picked goat A, the host will always reveal goat B. But if you picked the car, there's only a 50% chance the host reveals goat B. So the chance you picked goat A is 2 times the chance you picked the car - thus a 2/3 chance switching will get you the car and a 1/3 chance it doesn't.
Your solution contains an error - the 4 cases don't actually have equal probability.
Consider the original Monty Hall problem, but with numbered goats. It actually has 8 cases, not 6: when the player chooses the car, Monty has a 1/2 chance to reveal either goat. So the actual cases are:
c g1 g2 -> c g1 (1/12 probability)
c g1 g2 -> c g2 (1/12 probability)
c g2 g1 -> c g2 (1/12 probability)
c g2 g1 -> c g1 (1/12 probability)
g1 c g2 -> g1 c (1/6 probability)
g1 g2 c -> g1 c (1/6 probability)
g2 c g1 -> g2 c (1/6 probability)
g2 g1 c -> g2 c (1/6 probability)
When you add up probabilities, you still end up with 2/3 chance of a goat when not switching and 1/3 when switching.
In the revised problem, the initial probabilities are the same, but the reveal excludes every possibility where the good goat would have been revealed. So the cases are as follows:
c g* g -> c g* (1/12 initial probability, 1/6 normalized)
c g* g -> c g (excluded)
c g g* -> c g (excluded)
c g g* -> c g* (1/12 initial probability, 1/6 normalized)
g* c g -> g* c (1/6 initial probability, 1/3 normalized)
g* g c -> g* c (1/6 initial probability, 1/3 normalized)
g c g* -> g c (excluded)
g g* c -> g c (excluded)
So in the end, you still get 2/3 chance to get the goat when not switching and 1/3 when switching, just like in the original.
ohhh .... nice
Yeah the optimal strategy is to stay as there is a 2/3 chance you picked the good goat given the bad one is shown. Here are several arrays showing this. Initially I screwed up myself but someone quickly pointed out my fallacy: https://x.com/pwalshmusic/status/1880355906569003337?s=46&t=wbf9WfI-2y6NxCrWwiqnNQ
I just realised this myself.
Assuming you're convinced that in the original Monty Hall problem you can increase your chance of winning (getting the car/million bucks/whatever), you don't actually need to do any probability calculations; they're already done, you just want the inverse.
It is obvious that with the new alternate win condition (getting the billionaire's goat) you maximise your chance of success by doing the opposite of what would increase your probability of getting the car.
The way I rationalized it is to say that because the host doesn't distinguish between the goats, then without loss of generality he picks the first goat door available.
c g g* -> c g* switch
c g* g -> NA (host would have shown g* - this is the tricky one)
g* c g -> g* c stay
g* g c -> g* c stay
g c g* -> NA (host would have shown g*)
g g* c -> NA (host would have shown g*)
My understanding is this problem condition says that:
1) Monty opens another door
2) Behind it is an ordinary goat
There is only one ordinary goat in this variant of the problem, so we can conclude from the condition that we have not chosen the door with the ordinary goat in any of the worlds in which the condition places us.
Hence, behind our door there is a car and a valuable goat with equal probability. Hence, it's 50/50.
Where am I wrong?
agreed, it's 50/50
Edit: no it's not. My thinking was: You can't have chosen the bad goat, since we assume Monty reveals it. Therefore if you chose the car, you need to switch, and if you chose the goodgoat, you need to stay. 50/50
It's wrong; following possibilities don't have the same probability:
1. You choose the car, Monty reveals the badgoat
2. You choose the goodgoat, Monty reveals the badgoat
If you choose car, Monty either reveals badgoat or goodgoat, one of which is excluded by the assumption. Therefore #1 has half the probability of #2
It seems to me that this reasoning is wrong. We should consider probability as a measure of our ignorance. That is, if a die fell and we didn't see it, the probability that it rolled a 3 is 1/6. If the die fell and we saw it, it is either 0 or 1.
In this case, our ignorance is reduced by the condition itself. We know a priori that Monty will not open the door with the car, the prized goat, or the door we chose. This completely rules out situations in which we chose a door with an ordinary goat. In effect (due to some magical intervention by the author of the condition) we are choosing between two doors: the one with the car and the one with the valuable goat. And the probability of choosing either of them is the same.
The point where we find out that monty won't reveal the good goat is causally downstream of the randomization of the doors. So it carries information about the doors.
No, I withdraw my objection. I still can't grasp it by intuition, but my own calculations show that player should not switch. So there's the Monty Hall version of the problem I broke down on!
I realized where my intuition was wrong! I had assumed that since the condition excluded the choice of a door with a regular goat, the total probability of the worlds available in the problem was 2/3. Since the choices of the door with the car and the door with the nice goat both have probability 1/3, it turns out that they are equally likely: (1/3)/(2/3)=1/2
But the condition rules out more than this! It also rules out “I chose the door with the car and Monty opened the door with the good goat”, which has probability 1/6. Then everything converges: the total probability is 1/2, the door with the good goat has probability (1/3)/(1/2)=2/3, and the option “the door with the car + opening the door with the bad goat” has probability (1/6)/(1/2)=1/3.
doesn't the total probability being 1/2 constitute the same rule out as choosing the door with the car and then the good goat being revealed?
that is, if we take Monty showing us a bad goat as a given, we can't use that given twice to estimate probabilities of other things, i thought.
alternately phrased, since seeing the good goat is ruled out, we can't use a scenario where we see the good goat to get a probability, is my intuition.
Edit: I think I get what you're saying; the phrasing just confused me. You mean that, given we can rule out the situation in which you picked the car and Monty shows the good goat (because that's not included in the problem), there have to be more opportunities for you to have picked the good goat, since you can't be seeing it when Monty shows you a goat. I don't know why this is so confusing to read or explain, given that the obvious intuition is to stay.
Yes, you haven’t chosen the ordinary goat. And if you have chosen the car, Monty hasn’t shown you the valuable goat, which would be unplayable anyway and also isn’t part of the initial constraints.
So there are two options - you are on the door with the valuable goat, or you have chosen the car. 50-50
Except it's half as likely that you picked the car:
If you picked the valuable goat, 100% of the time Monty shows you the non-valuable goat.
If you picked the car, 50% of the time Monty shows you the non-valuable goat. By assumption we eliminate the other possibility.
So in two out of three cases where you see a non-valuable goat, you've picked the valuable goat.
Yes indeed this is correct. Optimal choice is to stay. Sometimes it’s easier to grasp these conditional probabilities with a tree diagram with hard numbers (1,000 simulations for instance) rather than pure probabilities. Here is one of helpful: https://x.com/pwalshmusic/status/1880355906569003337?s=46&t=wbf9WfI-2y6NxCrWwiqnNQ
My thought with the revised Monty Hall problem: we know from the normal one that you should switch if you want the car, which implies you should stay if you want the (other) goat. The only difference is that you can tell the goats apart, but I can't imagine adding that to the original would change anything.
That is my impression as well.
I'm never sure why there is ever even a debate about this kind of thing. Not because the math and logic isn't tricky (it is), but because it's very straightforward to simulate this situation one hundred times with simple python code, so we could just... check.
Indeed:
import numpy as np

staywin = 0
switchwin = 0
total = 0
for i in range(10000):
    player = np.random.choice(np.array([10, 20, 30]), 1, replace=False)[0]
    if player == 10:  # per problem definition, we ignore all worlds where this happened
        continue
    if player == 30:  # you picked the special goat, host will always reveal normal goat and not car
        staywin += 1
    if player == 20:  # you picked car, host will reveal one of the two goats randomly
        host = np.random.choice(np.array([10, 30]), 1, replace=False)[0]
        if host == 30:  # host picked special goat, we ignore worlds where this happened per problem definition
            continue
        else:  # host picked normal goat
            switchwin += 1
    total += 1  # increment on possible worlds
print("Stay Win: {:.2f}%".format(staywin / total * 100))
print("Switch win: {:.2f}%".format(switchwin / total * 100))
>>
Stay Win: 66.87%
Switch win: 33.13%
Thanks. Of course then the trouble is to check if the code makes sense, but that seems easier than to check the math.
Here's a simpler implementation: https://pastecode.io/s/o5scz95i
In science, one shouldn't believe an experimental result for certain until it is confirmed by theory. One should also not believe a scientific theory until it is backed up by experiment.
In any case, testing it by experiment shows you the answer in this case, as Naremus demonstrated. But what do you learn, other than the answer?
I found it paradoxical that if you get at least 23 random people together, it's likely (>50%) that at least two share a birthday. When I delved into this to understand why, and understood its truth, I concluded that if one buys three easy-pick lottery tickets, it is more likely that one ticket will match another than that any of them has the winning combination.
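The birthday figure is easy to check directly. A sketch, assuming 365 equally likely birthdays and ignoring leap years:

```python
# P(at least two of n people share a birthday), via the complement:
# the probability that all n birthdays are distinct.
def birthday_collision(n):
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1 - p_all_distinct

print(birthday_collision(22))  # ~0.476
print(birthday_collision(23))  # ~0.507
```

So 23 is exactly where the probability first crosses 50%.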
I don't disagree. But if you look at the other discussions, people do not just disagree about which is the right theory to reach the correct result, they also disagree about which is the correct result. And that just seems like a waste of brainpower, when it's so easy to check.
Also sometimes you just need the result for some practical purpose, for example to win the game described if you actually participate. This is less applicable here, since the entire point of a thought experiment is obviously to get you to think.
That's also the impression I got. Seems pretty straightforward to me. So I can't tell if I'm under-thinking it or if everyone else is over-thinking it.
If you see the good goat when the host opens the door there’s nothing playable. That rules out two options.
1) you’ve chosen the bad goat. Bad luck, you can’t play. You can only switch to the car. You can’t choose an opened door.
2) you’ve chosen the car and the host shown the good goat. Bad luck. You can’t play.
You have two remaining workable chances. So there are only two playable options.
A) You have chosen the car and the host has shown the bad goat. Switching here will get you the good goat.
B) You have chosen the good goat. Switching here will get you the car.
So it’s 50-50 of all playable options.
There are two 'playable' options but they aren't equally likely to occur: having chosen the good goat initially is twice as likely as having chosen the car initially. In half the worlds where you picked the car, the host shows the good goat; those didn't happen per the problem definition, but could have, because the host doesn't know not to pick the good goat.
Assuming the doors are random, and you pick door 1 (the door you pick doesn't change any probabilities), there are 6 possible configurations
G C B #+1/6
G B C #+1/6
B C G #X
B G C #X
C G B
C B G
The important bit: I hope you agree the last two worlds have a 1/3 chance of occurring (1/6 for each ordering) prior to the host revealing a door. Now, the last two break down (using [] to indicate the host's random choice) into two further possible worlds each, based on the host's random choice after your initial choice:
C [G] B #X
C G [B] #+1/12
and
C [B] G #+1/12
C B [G] #X
So adding up possible worlds that you could still be in, we get 1/3 you've picked the Good goat initially, and 1/6 that you picked the car and the host showed you the bad goat. Since having picked the good goat initially is more likely, we should stay.
You can do the same cheat as with the original problem, to make it more intuitive:
There are 1000 doors, with one car, one very special goat, and 998 not so special ones.
You choose a door and Monty opens 998, revealing 998 regular goats. Since Monty is working under the impression that you want the car, the probability that the car is in the door you didn't choose is 999/1000, so you should definitely not switch.
Monty doesn’t have enough information to open only doors with the rubbish goats, so you can’t generalise this problem in the same way as you could the original Monty Hall problem. In the original, Monty has to open only goat doors; in this case he doesn’t care about the valuable goat.
So in the original Monty is not going to show the car in any of the N -2 (998 here) doors he opens, in this case the valuable goat will appear most of the time in his opened doors.
Yes, Monty wouldn't care about the type of goat, and in most cases he would show it with the rest. But as in the 3 door problem, there just isn't a problem to solve if he shows you the good goat: you can't get the goat and just pick the (door most likely to have the) car. However, if he hasn't shown you the GOAT goat, it's more likely to be in the door you already picked.
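If you condition the 1000-door version the same way, an exact check is short. A sketch, assuming Monty (who avoids only the car and your door) happens to leave one random other door shut:

```python
from fractions import Fraction

N = 1000
# You picked the special goat (prob 1/N): Monty's 998 reveals are then
# forced to be ordinary goats, so the observation is automatic.
p_picked_special = Fraction(1, N)
# You picked the car (prob 1/N): the special goat survives the reveals
# only if it's behind the single door (of the other N-1) Monty leaves shut.
p_picked_car_special_hidden = Fraction(1, N) * Fraction(1, N - 1)
# Condition on the special goat never having been revealed:
p_stay_gets_special = p_picked_special / (p_picked_special + p_picked_car_special_hidden)
print(p_stay_gets_special)  # 999/1000
```

So in the rare runs where the good goat never shows up behind Monty's 998 doors, it is almost certainly behind your own door.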
This version of the problem seems to place us in a universe with six possible combinations, but the twist is that we actually find ourselves in one with only four possibilities: it is impossible that we initially chose the bad goat. In the original version, we don’t learn anything that changes the odds of switching; in this version, we don’t learn anything at all, although the game is phrased in a manner that leads us to believe that we have.
Play it this way: I’ll be Monty and start by opening the door with the bad goat and then let you pick another door. Now I’ll offer you the chance to switch. Rather obviously, you’ve learned nothing since your original choice and so the odds cannot have changed.
The original version can be rephrased to make the correct choice obvious: I’ll let you choose one door, and then immediately give you the option of holding or switching for both other doors, one of which is certainly worthless. What fools us is thinking that being told precisely which door is worthless changes anything important.
In the classic formulation of the Monty Hall problem, there are three outcomes.
1. You initially pick car. Monty reveals a goat at random. If you stay you win, if you swap you lose.
2. You initially pick goat A. Monty reveals goat B. If you stay you lose, if you swap you win.
3. You initially pick goat B. Monty reveals goat A. If you stay you lose, if you swap you win.
So in 2 out of 3 equally likely scenarios you win if you swap.
In the revised version there are three different outcomes.
1. You initially pick good goat. Monty reveals bad goat. If you stay you win. If you swap you lose.
2. You initially pick bad goat. Monty reveals good goat. You cannot win.
3. You initially pick car. Monty reveals either good goat or bad goat with 50% probability. If he reveals good goat, you cannot win. If he reveals bad goat, you win if you swap and lose if you stay.
By revealing bad goat, Monty has inadvertently eliminated all of option 2's probability weight and half of option 3's weight. Those are worlds you could have been in but now know you are not. So you now know you are in either scenario 1 or scenario 3. But because scenario 3 has only a 50% chance of revealing the bad goat while scenario 1 has a 100% chance, you are twice as likely to be in scenario 1. If you stay you are therefore twice as likely to win compared to swapping.
What makes the Monty Hall problem so counterintuitive is that people tend to think about a probability as a property of the specific scenario. Eg. 3 doors 1 prize must always have the same probability. In actuality, a probability is a statement about *how much we know* about a scenario. In the original, Monty knows where the car is and will always reveal a goat. In the revised version, Monty does not know which goat is good and which is bad, and his actions reveal different information.
Of course the real best play is to try to win the car then offer to buy the good goat after the game is over.
Most concise analysis, so far. (in my book, anyway)
A) 1/3 chance you picked the correct goat and the host showed you the wrong goat. The situation where you picked the correct goat and the host showed you the car cannot occur based on the rules. In 100% of cases here you will see the wrong goat, so seeing the wrong goat gives you no information on whether you picked the correct goat or the car.
B) 1/3 chance you picked the car, but when the host shows you the wrong goat, you can eliminate the scenario where you picked the car and the host showed you the correct goat. This is 50/50.
C) 1/3 chance you picked the wrong goat but when you see the wrong goat you can eliminate these scenarios.
Of the scenarios not eliminated by assumption, in two you picked the correct goat, and in one you picked the car.
He's more likely to have shown you the normal goat in the "you picked special goat" situation (100%) than in the "you picked car" situation (50%) or the "you picked normal goat" situation (0%), so you do get to update your probabilities: You're more likely in the scenario where you're more likely to have seen what you actually saw. (Unlike in the original problem, where you see "goat" 100% of the time.)
It's actually equivalent to the original problem: 1/3 odds your original pick was car, 2/3 odds your original pick was _the other goat_.
I think the easiest way to express what's going on is with the odds form of bayes theorem.
In the original monty hall problem, say we hypothesize that we picked the right door from the start. We believe this to be true with 1:2 odds (since there's 1 car and 2 goats).
Then, monty shows us a goat. If we picked the car, that'll happen 100% of the time, so we don't adjust the 1. If we picked a goat, that'll also happen 100% of the time, so we don't adjust the 2. Thus, after showing us a goat, we still believe that our initial pick is correct with 1:2 odds. This implies that the *alternative* is correct with 2:1 odds, so we should switch.
Then, consider the revised problem.
There's 3 cases: We picked the special goat, we picked the normal goat, or we picked the car, each with equal odds, so 1:1:1.
Then Monty reveals the normal goat.
Under the hypothesis that we picked the special goat, this is unsurprising. That's what *must* happen, so we don't adjust.
Under the hypothesis that we picked the normal goat, this is impossible, so the odds go to 0.
Under the hypothesis that we picked the car, this is *surprising*. We'd expect to be shown the normal goat only half the time (we'd see the special goat the other half), so our 1 gets multiplied by 1/2.
After the information, our odds are at 1:0:0.5. We can re-normalize to 2:0:1, meaning that we expect there's a 2/3rds chance that our door has the special goat and thus we should stay.
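That odds-form update fits in a few lines. A sketch (the hypothesis labels are mine):

```python
from fractions import Fraction

# Prior odds over what you picked: special goat : normal goat : car = 1:1:1.
odds = {'g*': Fraction(1), 'g': Fraction(1), 'car': Fraction(1)}
# Likelihood of Monty revealing the normal goat under each hypothesis.
likelihood = {'g*': Fraction(1), 'g': Fraction(0), 'car': Fraction(1, 2)}
# Multiply prior odds by likelihoods, then normalize to probabilities.
posterior = {h: odds[h] * likelihood[h] for h in odds}
total = sum(posterior.values())
probs = {h: p / total for h, p in posterior.items()}
print(probs)  # special goat 2/3, normal goat 0, car 1/3
```

The 1:0:0.5 ratio from the comment above appears as `posterior` before normalization.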
I think the easiest way to solve the revised Monty Hall problem is to use the solution to the traditional Monty Hall problem.
No matter what you desire in your heart of hearts, since Monty doesn't know about it, it will always be the case that switching has a 2/3 chance of winning you the car. Therefore not switching has a 2/3 chance of winning you whichever goat has not been seen yet.
In the situation described, the goat that hasn't been seen yet is the valuable one, and so you have a 2/3 chance of getting it by not switching.
No, wrong. Because in the original, the host can ALWAYS open another goat door. But in the new scenario, if you originally picked the bad goat, it is no longer possible for the host to open a bad goat door. In the new scenario (unlike the original), that possibility cannot happen.
> No, wrong.
Kindly was right. The new problem is trivial to solve in terms of the old problem. The host never seeks to open a bad goat door. All goats are alike to him.
But does the host EVER open a door with a good goat? The description says no, it didn't happen. But it ... "could have" happened? You have to be careful in probabilities, with assigning weights to possible worlds that you know did not occur. In what precise sense "might" Monty have opened a good goat door -- given that we know he didn't?
It matters a lot what space of games you're drawing from. We KNOW that, in the games we're considering, you CANNOT have originally chosen the bad goat door. This is different from the original Monty Hall problem, where originally choosing the "bad goat" WAS a possibility. So the possibilities (potentially) have changed. You have to be careful, solving it "in terms of the old problem". It isn't the old problem any more.
Are you trying to say something? If so, what is it?
Here's a possible repeated game: FIRST, Monty opens a door with a bad goat. THEN you choose from the remaining two doors. (One has a good goat, one has a car.) THEN Monty offers you the chance to switch. Do you agree, in this game, that the odds are 50/50, whether you switch or not? You could play it 1000 times, and you'll get the car 50%, and the good goat 50%, whether you switch or not.
You want to say that the game that we are talking about is different than the game in my previous paragraph. Different how, exactly? Here's one way it might be different: if you played the original Monty Hall problem, and your first choice was the bad goat, then the scenario described CANNOT happen. (Monty doesn't have a bad goat to show you.) So it must be clear that you cannot be playing the original game. The remaining question is: if you choose the car first, might Monty have shown you the good goat then? According to the puzzle, he didn't. But ... "could" he have? What does that actually mean? "Could" you have chosen the bad goat first? It seems, somehow, that you "couldn't" have chosen the bad goat ... yet Monty "could" have chosen the good goat (if you chose the car).
What exactly is the difference in analysis, between the "couldn't" (you first choosing the bad goat), and the "could" (Monty choosing the good goat when you choose the car first)? According to the scenario, neither one happened. Why is one still "possible", somehow, but the other isn't?
This only matters if you believe that the optimal strategy in the *original* Monty Hall problem depends on the quality of the goat behind the opened door.
To put it differently, suppose that we are in the scenario with the eccentric billionaire's goat, but you are philosophically opposed to helping billionaires retrieve their lost pets, and so both goats are equally worthless to you: you want the car. When Monty opens the door and you see the bad goat behind it, that's vaguely interesting but doesn't affect you in any way: by the standard Monty Hall strategy, you have a 2/3 chance of getting the car if you switch, and a 1/3 chance of getting the (billionaire's pet) goat.
Similarly, there is a different world, in which Monty opens the door with the billionaire's pet goat behind it. Again, that's vaguely interesting but doesn't affect you in any way: by the standard Monty Hall strategy, you have a 2/3 chance of getting the car if you switch, and a 1/3 chance of getting the (bad) goat.
The problem is that your last paragraph didn't happen, by assumption in the new scenario. So it matters a lot about why it didn't happen. It depends a bit on when you eliminate those possible worlds. If you do it before you analyze the scenario at all (originally choosing the bad goat can never be part of this new game), then it's only 50/50 for the remaining choices.
You're basically making an analysis based on a counterfactual that didn't happen (you choose the bad goat, and Monty shows the good goat). You're assigning it some weight, some probability that it "might have" happened -- even though we know it didn't.
That's where the disagreement is. With a single iteration, there aren't any probabilities at all. You either get the goat, or the car, 100%. Probabilities are about your state of knowledge, and they can only be verified by a repeated experiment where you set up some scenario over and over again, and then count the various outcomes.
What is the scenario that is being set up over and over again, in this case? If I run the example 1000 times, what are the 1000 examples? Does Monty show the good goat in some of those 1000 runs? Or not?
You need an "example generation machine", that can generate scenarios with outcomes you can count. This new problem is a little bit ambiguous about what the space of possible games is.
We're assuming that Monty doesn't know anything about the goats, so he is equally likely to choose either goat. Therefore the two scenarios are equally likely, and more importantly P(your door has car | Monty picks good goat) = P(your door has car | Monty picks bad goat).
I agree that if we don't assume this, then the answer to the entire problem is different. But I think it's unambiguous that we should assume it.
Yes, I think you're probably right.
The difference is that the host opens a GOAT door, chosen uniformly at random from the two goats. The uniform part is the active ingredient here. In the original show, the two goats were indistinguishable, but if the host opens a goat at random then the two are still indistinguishable TO THE HOST, although not to you.
The thing about the car is that the host never opens a car door (they know where the car is, after all), which gives you the 2/3 probability of the prize you really want, which in the original case is guaranteed not to be a goat.
The revised problem as stated (you've already seen a goat but not the one you want) has cut off the branch in the probability tree where you get the good goat, so you're conditioning on seeing the other one.
If you switch, you still get probability 2/3 of a car. Therefore if you don't switch, you get probability 2/3 of "a goat" - but in this case you know it's the goat you want.
We can just compute the answer with Bayes' theorem.
P(you picked good goat | bad goat revealed)
= P(bad goat revealed | you have good goat) P(you picked good goat) / P(bad goat revealed)
= 1 × (1/3) / (1/2) = 2/3
On the Monty Hall one - good twist!
I've often thought of this modification:
You pick a door. Monty randomly opens a door you didn't pick (not worrying about whether it has a goat or a car). He happens to reveal a door with a goat. Now what should you do?
Before you saw what he revealed, there's a 1/3 chance your first pick was the car (and he definitely reveals a goat), a 1/3 chance your first pick was a goat and he reveals a goat, and a 1/3 chance your first pick was a goat and he reveals the car.
We can eliminate the last one, now that we've seen the goat, so it's 50/50.
It's interesting that it's different again when he definitely chooses not to reveal the car, not knowing that one of the goats is the prize you want!
How do you feel about the original Monty Hall problem? If we apply the same chart but want the car instead of either G or B, then stay/switch is also 50/50, right?
The lesson I take from both the original and this one is that probability is not about the setup itself; it's about information. Think about these problems as a Bayesian would. (In fact, a straightforward application of Bayes' theorem really shows it here.)
In this case, when you get lucky enough to see the bad goat, that tells you that you probably picked the good goat. Worlds in which Monty reveals the bad goat are simply less common when the bad goat is not the one you chose.
Rrr, on that last line... when the *good* goat is not the one you chose.
This is what I was thinking (50/50 at the end), but I wrote a simulation that says you should only switch 1/3 of the time at the end: https://pastebin.com/YtvBWpc5
Not sure yet what the problem is.
Monty is randomly sampling the goat doors remaining, so revealing Smith gives you Bayesian information about the distribution of remaining goats.
There are three prizes:
Rosebud, the billionaire's pet goat
Smith, a regular goat
Herbie, a car
P(Monty shows Smith | you chose Herbie) = 50%, since there are two goat doors for Monty to choose between.
P(Monty shows Smith | you chose Rosebud) = 100%, since Smith is the only goat available for Monty to show.
P(Monty shows Smith | you chose Smith) = 0%, since Rosebud is the only goat available for Monty to show.
So Monty actually showing Smith should have you update your estimate of P(you chose Rosebud) from 33.3% to 66.7%.
Another way of working the problem that works out the same is to reduce it to the already-known solution to the standard Monty Hall problem, that there is a 2/3 chance that the car is behind the last door and a 1/3 chance the goat is there. Your door has a 2/3 chance of goat and a 1/3 chance of car. Since the still-hidden goat is Rosebud, you should stay with your door to maximize your chance of finding him.
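A quick Monte Carlo sketch agrees with this update (Python; this is my own simulation of the rules as stated, not the linked pastebin):

```python
import random

rng = random.Random(0)
shown_smith = picked_rosebud = 0
for _ in range(100_000):
    doors = ["Rosebud", "Smith", "Herbie"]  # pet goat, regular goat, car
    rng.shuffle(doors)
    pick = 0  # by symmetry, always picking door 0 loses nothing
    # Monty opens an unpicked GOAT door, uniformly at random if both qualify.
    goat_doors = [i for i in (1, 2) if doors[i] != "Herbie"]
    shown = doors[rng.choice(goat_doors)]
    if shown == "Smith":
        shown_smith += 1
        if doors[pick] == "Rosebud":
            picked_rosebud += 1

print(picked_rosebud / shown_smith)  # close to 2/3: stay to keep Rosebud
```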
Possible scenarios:
Your door is the good goat: 1/3
Your door is the bad goat, Monty shows you the good goat (he won't show you the car) and you lose immediately: 1/3
Your door is the car, and Monty (50/50) shows you the good goat: 1/6
Your door is the car, and Monty shows the bad goat: 1/6
The switch strategy wins 1/6 of the time; the keep strategy wins 1/3 of the time; 1/2 of the time you lose immediately. Conditional on seeing the bad goat, you have a 2/3 chance of winning by not switching.
One of those is not playable, Monty showing you the good goat when you pick the car.
2 scenarios are not winnable. Both involve monty showing you the good goat. In those cases you switch to have a maximum chance to win the car.
This feels bizarre to me. Why would the host's intentions matter? But I think your reasoning is correct.
When you see new information, it's not enough just to eliminate impossible options.
You also have to take into account that the options that made the new evidence more likely are themselves more likely than options that had a lesser chance of showing you that information.
Having the host open a goat door is more likely if you originally chose a car door than if you chose a goat door. Which bumps the probability that you originally chose a car door from 1/3 to 1/2.
It's not precisely that the host's *intentions* matter, but rather that their *algorithm* matters. In order to understand the significance of the information the host reveals, you need to understand what *could* have been revealed. Since the host's actions depend on what door you originally picked, you can't calculate the counterfactual observations without knowing the host's decision algorithm.
For example, suppose the host acts like this:
- If your original pick was the car, the host reveals a goat and asks if you want to switch
- If your original pick was a goat, the host treats that as your final choice and does not give you the opportunity to switch
In this case, if the host reveals a goat, then there is 100% chance that you already picked the car, and a 0% chance that you will get the car if you switch.
You can make it even more extreme. Say that the host says, “I will open the lowest-number door that is not the one you pick, and not the one with the car”. If you pick 1 and the host opens 3, it’s obvious that 2 has the car. But if you pick 1 and the host opens 2, then you’re now 50/50.
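That variant is small enough to enumerate exactly. A sketch (Python, assuming you always pick door 1 and the car is uniform over the three doors):

```python
from fractions import Fraction

def opened(car, pick=1):
    # Host opens the lowest-numbered door that is neither your pick nor the car.
    return min(d for d in (1, 2, 3) if d not in (pick, car))

prior = {car: Fraction(1, 3) for car in (1, 2, 3)}

def posterior_given(obs):
    # Keep only the car positions consistent with the observed opening, renormalize.
    consistent = {car: p for car, p in prior.items() if opened(car) == obs}
    total = sum(consistent.values())
    return {car: p / total for car, p in consistent.items()}

print(posterior_given(3))  # host opened 3, so the car is certainly behind 2
print(posterior_given(2))  # a genuine 50/50 between doors 1 and 3
```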
> You also have to take into account, that the options that made the new evidence more likely, are themselves more likely than options that had a lesser chance of showing you that information.
That's true, but you're not doing it.
> Having the host open a goat door is more likely if you originally chose a car door, than if you chose a goat door.
That's false. By definition, the probability that the host opens a goat door is 100% in all cases.
??? The comment I responded to modifies the usual problem, to allow the host to open a car door if you didn't choose it.
I apologize. It's a long thread.
I just want to commend the OP for rephrasing to at least implicitly specify the host's decision algorithm: the host opens a door showing a goat "as is traditional", meaning always or nearly always. This rules out algorithms like "Host shows you a goat and offers a chance to switch if you picked the car, but doesn't open any door or give you a chance to switch otherwise, because cars are expensive and goats are cheap". Or, "Host decides what to do based on what he feels the audience would most enjoy, and since you're a telegenic and enthusiastic young woman he thinks they'd be happiest if he nudged you towards switching your choice to the car (which isn't being paid out of *his* salary)".
The original Monty Hall problem, what divided the intertubes back in the day, only specified that the host revealed a goat and offered a switch on this one occasion, and much of the dispute was based on differing assumptions as to the algorithm.
And, of course, this was when Monty Hall was still alive and could tell you what the algorithm was if you asked. It was not, "host always opens a door to reveal a goat".
Agreed, I've always felt the problem tells us more about how easy it is to make, rely on, and be confused by unspoken assumptions, than it does about probability.
As Jeffreys says in the Theory of Probability:
"The most beneficial result that I can hope for as a consequence of this work is that more attention will be paid to the precise statement of the alternatives involved in the questions asked. It is sometimes considered a paradox that the answer depends not only on the observations but on the question: it should be a platitude."
Cool. I didn't realize "the Monty Hall problem" was not in fact a regularly occurring game on the show. Never watched it. History has been rewritten by this brain teaser. Everyone will think, as I did, that it was the whole show.
> The original Monty Hall problem, what divided the intertubes back in the day, only specified that the host revealed a goat and offered a switch on this one occasion, and much of the dispute was based on differing assumptions as to the algorithm.
This is just you not knowing what you're talking about. The problem is older than popular awareness of the internet. It divided the letter-writing public.
https://en.wikipedia.org/wiki/Marilyn_vos_Savant#Monty_Hall_problem
> Savant was asked the following question in her September 9, 1990, column:
> Suppose you're on a game show, and you're given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say #1, and the host, who knows what's behind the doors, opens another door, say #3, which has a goat. He says to you, "Do you want to pick door #2?" Is it to your advantage to switch your choice of doors?
> This question is called the Monty Hall problem due to its resembling scenarios on the game show Let's Make a Deal, hosted by Monty Hall. It was a known logic problem before it was used in "Ask Marilyn".
> If the host merely selects a door at random, the question is likewise very different from the standard version. Savant addressed these issues by writing the following in Parade magazine, "the original answer defines certain conditions, the most significant of which is that the host always opens a losing door on purpose. Anything else is a different question."
And her second followup pointed out:
>> We've received thousands of letters, and of the people who performed the experiment by hand as described, the results are close to unanimous: you win twice as often when you change doors. Nearly 100% of those readers now believe it pays to switch. (One is an eighth-grade math teacher who, despite data clearly supporting the position, simply refuses to believe it!)
>> But many people tried performing similar experiments on computers, fearlessly programming them in hundreds of different ways. Not surprisingly, they fared a little less well. Even so, about 97% of them now believe it pays to switch.
>> And a very small percentage of readers feel convinced that the furor is resulting from people not realizing that the host is opening a losing door on purpose. (But they haven't read my mail! The great majority of people understand the conditions perfectly.)
( https://web.archive.org/web/20100310140547/http://www.marilynvossavant.com/articles/gameshow.html )
Isn't this version of the problem simpler than the original one? You've distinguished the goats, so now there are just 3 doors with 3 prizes. You've already been told that Monty opens a door to reveal a prize you're not interested in so you should stick to win 2/3 of the time. (Because, as in the classic Monty Hall problem, the car is behind the third door 2/3 of the time.)
Your point is absolutely right though - a lot of the confusion on these problems hinges on Monty's behaviour not being completely specified. I think in your scenario the key information is that Monty doesn't know/care where the car is, so in opening a door he's not giving you any information about where the car is or isn't. You can swap the order of events: Monty reveals a goat, you choose from the remaining doors, and have a 50/50 chance of winning. Conversely if Monty avoids the car, he reveals information and improves your chances.
> You've already been told that Monty opens a door to reveal a prize you're not interested in so you should stick to win 2/3 of the time.
No, the opposite. This is the same as the original problem, with no differences except in which prize you want. Monty opens a door to reveal a goat. What he reveals is a prize you don't want, but that's not why he opened the door.
The results are identical to the original problem, because it's the same problem. After Monty opens the one door, the final door has a car 2/3 of the time, and your door has a car 1/3 of the time. Since you don't want the car, you don't switch.
Bayes theorem is the easiest way to solve all the variants.
For your variant (door is opened randomly):
P(you pick car | goat revealed) = P(goat revealed | you pick car) P(you pick car) / P(goat revealed)
= (1 × 1/3) / (2/3)
= 1/2
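The random-reveal variant simulates the same way; conditioning on the goat actually observed is the step intuition tends to skip (a Python sketch of my own):

```python
import random

rng = random.Random(1)
goat_revealed = car_behind_pick = 0
for _ in range(100_000):
    doors = ["car", "goat", "goat"]
    rng.shuffle(doors)
    pick = 0
    shown = rng.choice([1, 2])  # Monty opens an unpicked door at random
    if doors[shown] == "goat":  # condition on what actually happened
        goat_revealed += 1
        if doors[pick] == "car":
            car_behind_pick += 1

print(car_behind_pick / goat_revealed)  # close to 1/2
```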
I don’t know if non-Anglo countries also believe this but the idea that being cold gives you a cold is very widely held here despite people also knowing that it’s transmitted by a virus. I know there’s some research about seasonality etc but a lot of people seem to go way beyond that
I can confirm that it's a common folk belief in India as well.
In a similar vein, do non-Americans believe that vitamin C helps with colds?
That's a fairly common belief here in the UK.
Also in Spain.
That's not entirely wrong though - you are more susceptible to diseases when you're cold. So I don't think that's equivalent.
…the cold water thing isn't entirely wrong either, though: drinking the local tap water without boiling it, in places where that belief is prevalent, carries a risk of making you ill. It's lumping the fan thing in there that seems unfair; that's the belief that isn't like the others.
This.
There are two stable equilibria. If everyone believes that drinking non-boiled tap water is safe, then you want regulations in place regarding the amount of bacteria in the water, so it is indeed safe.
If nobody thinks it is safe, then nobody cares and it will probably be somewhat unsafe by default.
Well, my understanding is that Chinese tap water contains a lot more heavy metal than you might hope, which boiling won't help with.
(Source: am Chinese, but don't have a scholarly understanding of this or anything) I do think it's more than just believing that the water needs to be boiled in order to be safe. Like if you boiled water, let it cool, and added ice to it, many Chinese people would probably think that it would hurt them to drink it. Like it will affect your digestion or fertility.
I think it's more likely that the belief is due to the fact that being cold can make your nose run.
I thought it was that being cold can put your immune system down a level making you more susceptible to the actual cause of the sickness.
But for me being cold definitely makes my nose run which feels like having a mild cold.
I suspect that the second thing you mentioned is the real origin of the belief that cold causes colds, and the first was something that people came up with afterward.
I think it's got a lot more to do with the fact that people get colds at the start of winter. (Because the folk belief is true, at least in certain ways.)
What makes your nose run is the counter-current heat-exchange system that helps you avoid wasting lung water. The air you exhale is warm and full of water; in the nose it meets cold blood through very thin nose walls, cooling down and making the water gather into droplets. Some of those run out of your nose, as a side effect. The system works more strongly when the air (and the nose) are colder. The nose running when it's cold is not a disease; the nose is not producing more mucus. Heat exchange systems like this are common among mammals.
This is interesting, but are you sure? Would the nose stop running (would the nose stop, haha) if you started breathing through your mouth? Or if you either inhaled or exhaled through your mouth consistently? If this is true, it would have to stop (maybe after the nostrils dry). I may test that if it gets cold again.
It's standard animal physiology. You can test if your nose only starts running outdoors, but not indoors (it must be actually warm indoors; some people keep 16 centigrade at home in winter, this can make the nose run). If it also runs at a warm temperature, you probably have an infection, allergy, or other irritation in the nose, producing extra mucus.
Good to know. I think you missed part of my point: the body doesn't have a reserve of cold blood, right? If the nose is cold, it's because the nose is cold. So within a couple of minutes of coming indoors, the dehumidifier effect should stop. And give it a few minutes more for the nasal passages to drip a bit. Though to know how long this takes, you'd have to get a reference measurement: use some nasal spray and time how long your nose takes to dry.
Yeah, I don't know exactly how long it will keep dripping after coming indoors. Probably it's also slightly different person-by-person. Some have naturally colder noses even indoors, etc.
I don't know about the US, but in Europe people believe that it's OK to blow your nose when you have a cold, instead of swallowing it or spitting it on the street. In medical terms, that's gross.
In societal terms it’s less gross
Spitting it in the street is *less* gross? What?
We find it gross because our parents have told us that it's gross (unless we do it on the soccer pitch, if you live in Europe).
Mucus on the street will not infect anyone. It does not find its way to the faces of people, and the street is a terrible place for germs to survive.
Asian people don't mind anyone spitting on the street, but they find abhorrent the thought of blowing mucus into a handkerchief, putting it into your pocket where it has perfect breeding conditions, touching it with your hand, and then perhaps even wanting to shake hands with them. From a hygienic perspective, they are right that each of these steps is gross.
In the U.S., standard practice is to blow your nose into tissue paper made for that purpose and then throw it in the trash. And, ideally, to wash your hands afterward.
I'll defer to LKY on this one:
https://youtu.be/xQbLo-Kt0W4?si=WmmsqhPRXWBowmH9
Not all mucus can be sucked from your nose into your mouth so you can spit it on the street (not gross at all :) )
This could also be related to the Asian belief that shoes worn outside are so absolutely disgusting that they can't be tolerated inside, not even for a moment, not even for visitors. If the streets are expected to be covered with other people's snot then this makes more sense.
> If the streets are expected to be covered with other people's snot then this makes more sense.
Not really; an American sidewalk isn't expected to see a lot of mucus, but it *is* expected to see a lot of dog poop, which is much worse.
I like to spit into grass or bushes, personally.
So, Asia is a big place (cit?). In Japan people do not spit anywhere. On top of that a common advertising tactic in winter is to hand out tissues with ads on them; so people are doing something with those tissues. I'm pretty sure spitting is illegal in Singapore.
If you're thinking about China only, then there are reasons outside of cleanliness to consider, like weird hangovers from the Cultural Revolution.
Most Germanic languages call the disease something to do with low temperature. Also Greek, at least some Slavic languages and some Romance languages, and the Japanese word for the disease (kaze) means 'wind'. The Korean word isn't related to weather, but they also believe that you get it from the cold. So this seems pretty common for people living in places that get cold.
That was the first thing I thought of too.
I believe the British version of fan death is "non-bio" (i.e. enzyme-free) laundry powder - lots of people here believe that the enzymes in "bio" washing powder cause them skin problems, and every supermarket has both "bio" and "non-bio" washing powder for sale (if anything, the non-bio stuff is more common). It's my understanding that this is total nonsense, even for babies, and that in most countries non-bio washing powder is either unavailable or only available in kooky health stores. Can non-Brits confirm?
I lived in the UK for my first 30 years. And the thing we were taught at school is that the enzymes in "bio" got into the waste water and were somehow worse for the environment. I'd never heard the skin problem angle.
But now you've said it - it seems weird to believe that bio washing powder is extra polluting, but that it shouldn't be banned - given how much Brits love banning stuff.
Huh, I've never heard that. I've heard that phosphates in detergent can cause environmental problems, but according to Wikipedia that's true (and led to them being banned in the EU and US): https://en.wikipedia.org/wiki/Phosphates_in_detergent
I'm a non-Brit (from the U.S.), and I've never heard of this.
Thanks!
Must admit, the non-bio thing is something I've always believed and am quite shocked to find that it's about as scientifically accurate as Fan Death.
I mean, at least it's plausible though. The idea that certain unspecified chemicals rubbing up against your skin all day can cause skin problems sounds pretty reasonable to me as an educated non-specialist, whereas it's hard to come up with a plausible mechanism for fan death.
Some people do have allergies to specific detergents, including "bio" ones. So this is a real thing, the only question is how prevalent it is. I do know of one case in my extended family where a child got a rash after the family, which typically uses non-bio, accidentally bought bio detergent from the same brand (the other children in the family were not affected).
In my experience it’s less the type of detergent and more to do with how well it has been rinsed from the clothes. Any soap will irritate the skin if not properly rinsed.
There are at least 2 theories why being cold would increase the risk of infection:
- the body is shifting its energy expenditure away from keeping up the immune system and towards heat production, as well as reducing the blood flow to external surfaces like the mucosal membranes in your nose, thereby also reducing the effectiveness of your immune response.
- typical cold viruses are evolutionarily optimized to reproduce in the upper airways instead of the lower parts or even the lungs. This makes an infection much easier on the body, so you can still run around and infect a lot of other people, thereby increasing the fitness of the virus. The virus recognises where it is (lung or nose) via the surrounding temperature: if it's cold, it must be in the nose, so "low temperature" is its signal to start reproducing.
We are constantly in contact with all kinds of viruses anyway, at least the common cold ones. Whether you actually get sick is more a question of how the tug of war between germs and immune system ends. I also think I remember reading about experiments where researchers could influence the susceptibility to catching a cold by cooling down the feet of their test subjects while smearing cold viruses into their noses.
The medical literature on this isn’t certain. Mostly they will say that cold isn’t a cause of a cold but agree the immune system could be compromised.
I think that this is one case where old wives' tales are probably true, despite the lack of medical consensus. Remember that normal body temperature in the 19th century (and presumably before) was 1°C higher than today, so people were always fighting something; getting very cold or wet would then cause the sickness to become visible.
You need to be pretty darn cold for the immune system to get compromised; for the most part, when you're regularly cold, you just consume more kcals, that's it.
So why are colds so strongly seasonal?
If I recall, the reason is that cold air is much more dry, so the membranes in your nose produce more moisture to avoid getting all dried out and frozen, but this moisture also makes it easier for airborne respiratory viruses to germinate (wrong term, I know, but you get my point) and create a local infection.
Wouldn't that also make colds spread more easily in hot desert environments where it's also dry?
Good question. Not sure, but maybe the heat and stronger sunlight cause virus particles to break down more quickly, like Covid. The cold seems to be more potent than just the dryness, too. My nose starts running pretty quickly when the temperature drops below 40F. Not so on trips to dry warm places like California or Nevada. I just get thirsty faster.
Consensus is the two main reasons are:
- human crowding
- longer lived viruses
Also, these don't even have to be intrinsically important. I'll explain: because epidemics are cyclical in nature, and people once infected become immune for a while, even if a given season (here, winter) has only a small advantage in virus replication, it may turn out that it gets a very large share of all epidemics.
I still wouldn’t throw a bucket of water over an already wet child.
I don't recall any of the polar explorers ever catching a cold while in the iced lands and writing about it. There have to be other vectors of infection around you for the disease to get to you.
(I know that a cold is infectious. But, in theory, it could be your own germs that colonize the respiratory tract suddenly going haywire.)
Is there useful data on this in either direction? It can't be a false folk belief until it has been studied and debunked.
It is almost certainly true that actually being cold ever so slightly increases your odds of getting sick as it does divert energy from other systems to more rapid heat generation.
We still call one of the other illnesses you tend to get in the winter (in)flu(enza) because someone once thought it was the influence of the planets.
There's definitely some seasonality going on, to the point that we get our flu vaccines in early autumn.
Steven Pinker recently linked to evidence that cold undermines the immune system in the nasal passages, making people more susceptible to viruses. The old studies that supposedly “debunked” this “myth” put people in cold rooms with no exposure to viruses (I hear second hand), which isn’t a charitable interpretation of the folk belief. Yes, the cold doesn’t literally give you a cold, but yes, it increases the chances of you contracting a virus.
I was very paranoid during Covid, avoiding situations with people indoors. (The risk of catching a respiratory infection outdoors is dramatically lower.) I occasionally ate outdoors with my small children in winter or damp weather. They did not catch cold from that.
I still am not entirely convinced that being cold doesn't cause *some* kind of cold-like illness, even if it's not a cold in particular, considering the time I got a cold almost immediately after breathing too much cold air during a morning track practice (as in it was starting to hurt my throat and lungs); or the time that I stood waiting for the bus for an hour in -20℃ weather, and that very night I got a fever for the first time in years and slept like the dead for 15 hours. It would take a lot to convince me that the cold didn't play some more or less direct role in causing those illnesses. (Although I'm certainly open to the possibility that they weren't caused by pathogens.)
Re: Turkey and fertility - I would think that Turkey is *more* correlated with western social trends than China or Korea or India are (being almost in Europe), even if less than Latin America and Eastern Europe. The fact that fertility is falling in *all* these areas suggests this isn't about westernness at all - it's probably about some broader techno-economic trends (e.g., various things that make the difficulty of achieving a child-free or a child-ful lifestyle harder or easier, or things that make the appeal of a child-free or a child-ful lifestyle greater or lesser, or things that make people more choosy before they settle down with a partner, or whatever - better TV and smartphones and GPS-based hookup apps all seem like they would cross cultural borders).
It should be noted that Turkey implemented universal suffrage in 1934. They're not that divorced from western trends.
Pretty much every country in the world has a TFR graph that looks like this \. Some have a steeper slope, some have it more gradual, but it's declining pretty much everywhere. Only Central Asia seems to be an exception, but no idea why. I don't think it's Islam, though. Central Asians are on the lower end of the Islamic religiosity spectrum. Even highly religious Muslim countries like Afghanistan, Niger, and Yemen all have declining TFRs, though at a slower rate.
I don’t think that gives an explanation of why TFR has been falling in most poor countries, even as it has risen in Kazakhstan and Uzbekistan. That might explain why all these countries are higher than more middle income countries, but not the direction of change.
TFR is falling everywhere now because of microplastics, which can now be found all over the world. I don't have a citation for this, because I'm just making it up now.
That’s interesting - I hadn’t realized that TFR was actually increasing in Kazakhstan and Uzbekistan! But Tajikistan and Turkmenistan seem to have turned down again after an increase that started at the same time as those others. And I’m surprised that Afghanistan has been decreasing as fast as Pakistan and India - I had thought they were staying stubbornly high for a long time!
https://ourworldindata.org/fertility-rate
The increase in both Kazakhstan and Uzbekistan is interesting, because they've been on divergent paths culturally - Uzbekistan has really started leaning more into Arabic-style Islamic cultural stuff, while Kazakhstan is still really secular IIRC (it helps that it has a much better economy).
Gives me hope in a weird sort of way; the possibility that it's women's rights that brings fertility rates down is concerning to me.
It's not; they are just as far down in Saudi Arabia.
There is some progress on women's rights in Saudi Arabia, though.
TFR is down basically everywhere, even in places where women's labor force participation and rights both on paper and in practice are quite limited.
I am not a stats guy, so I don't know if this makes sense. But I read a while ago that it's caused by the emigration of Russians from Central Asia. Basically, before 1990, Russians constituted a large share of the Central Asian population. They had a much lower fertility rate than the locals. In the 90s and 2000s, Russians began to leave Central Asia, and as their proportion dropped, it artificially increased the TFR of Central Asian countries as a whole even though the locals' TFR was also gradually decreasing. Tajikistan and Turkmenistan have almost reached the floor of Russian population, so now their TFR has started to decline again. I didn't find anything else that talked about this as a factor, so I don't know if it is true.
The start of Afghanistan's decline in TFR coincides with America occupying the country and imposing a revolution in gender roles. We'll have to see if it ends up being a permanent change now that the Americans are gone. I hope so.
https://www.macrotrends.net/global-metrics/countries/AFG/afghanistan/fertility-rate
It's been more than 3 years since the Taliban took over the country, and the TFR continued to decline in 2022, 2023, and 2024. The Taliban could take away every single human right from women and I think the TFR would still decline, because even the most conservative Islamist nutjob still has to pay for all the children his wife/wives give birth to. The wife can't work, and if you have like 10 children, who is going to feed them all? It's not just food. He still knows nice clothes, household goods, things in general exist elsewhere, and if he has one less child, he could maybe have a slightly better quality of life. Modernity spares no one.
I think it's a combination of declining childhood mortality rates and reliable contraception. It's easier to manage your number of children and much fewer of them are dying before adulthood - that's why fertility decline is the slowest in ultra-poor parts of subsaharan Africa.
I think Kazakhstan's TFR rose because the low-TFR Europeans (mostly Russians, Germans, and some Ukrainians) emigrated after the end of communism, and the remaining population was Central Asian Muslims who already had somewhat higher TFR. It's a selection effect. Kazakhstan was around 7% German at one point.
Central Asian countries aren't actually that 'poor' by global standards, except I think Tajikistan.
re. specifically the "economic trends" side: Turkey might be a bit of special case because of just how poorly its economy is being managed. In the time range shown (2016 - 2023), the Turkish Lira lost about 85% of its value relative to the USD, current inflation is ~50% and unemployment looks like ~10%.
To the extent that fertility is affected by the difficulty of becoming a financially stable adult, becoming a financially stable young adult in Turkey seems particularly difficult.
I know there are lots of Turks who immigrated to Germany (not Greece, too much bad blood I assume), originally as guest-workers but they tended to stay. They've worked out better than many other immigrants from Islamic countries in Europe, perhaps because Turkey was ruled by secularists when that first wave came over, or perhaps Turkey being different caused it to be ruled by secularists. At any rate, there are clear differences between Turkish vs North African immigrants in Brussels.
Turkish immigrants to Germany came from the most backwards and Islamist parts of the country. They are big supporters of Erdogan despite many being 3rd generation at this point. They only seem better due to the mass migration of even more religious Muslims from 2010s onwards.
I don't think that this has to do with religion. Yes, North African immigrants cause a lot more trouble than Turkish immigrants, but Syrian immigrants also cause very little trouble despite being religious.
The Turkish immigration waves were decades before the ones from North Africa (at least for Germany), so those are very different demographics. Perhaps a bit closer in terms of demographics are the Romanian immigrants from late 90s and early 00s, and I think those were the trouble-makers of their time, despite being orthodox Christians.
I think the more important aspect is that immigrants from North Africa (and from Romania at that time) come in big family clans which act like mafia and try to build criminal networks, while Syrian refugees tend to have individual backgrounds.
> Syrian immigrants also cause very little trouble despite being religious
That's not what I heard. But I also heard many "refugees" lied about being Syrian when the decision was made to let lots of Syrians in.
Hum? I'm surprised by this, I hear this for the first time. Are you talking about Germany?
I think it has been consistent over years that refugees from Syria commit few crimes in Germany, much less than refugees from other countries, and massively less than immigrants from Maghreb. They still commit somewhat more crimes than the average population, but most of this is demographics: most of them are young men, and their crime level is not much above the level of other young men in Germany.
I googled for "Kriminalität von syrischen Flüchtlingen", and the first 2-3 hits seem to confirm this (links are in German).
https://www.tagesspiegel.de/politik/gefluchtete-und-kriminalitat-was-hinter-den-zahlen-steckt-9602333.html
https://www.tagesspiegel.de/politik/fluchtlinge-aus-syrien-seltener-kriminell-als-andere-zuwanderer-4581922.html
I also found the report of the BKA (German national crime authority) from 2023. It mentions several problematic nations of origin (mainly Maghreb+Georgia, for some specific crimes also Gambia, Nigeria, Somalia, Albania, Kosovo and Serbia), but they don't mention Syria.
My recollection was that it was a big story that lots of women got assaulted shortly after Merkel let all those refugees in.
I guess you mean the sexual assaults during New Year's Eve in Cologne. Those were African immigrants, as far as we can tell. At least among the suspects that could be identified. "Two-thirds were originally from Morocco or Algeria."
"Most of those 120 (identified suspects) had come from North Africa."
From: https://en.wikipedia.org/wiki/2015%E2%80%9316_New_Year%27s_Eve_sexual_assaults_in_Germany
Yes, just like it’s big news that Los Angeles has fires because of the delta smelt. Stories get slightly modified (whether intentionally or accidentally) and when a version taps into the zeitgeist, it goes viral, even if it’s not right.
"The fact that fertility is falling in *all* these areas suggests this isn't about westernness at all"
No it doesn't. The medium through which the ideological pathogen that causes low fertility (Progressivism, specifically its feminism tentacle) spread is *academia*, which is the world's only near-global monoculture. It originated in the West, even though it has now spread beyond it.
I would have thought that Hollywood and Bollywood were more global than academia, which is still relatively limited in its reach even in Europe and North America, where barely half of people have any interaction with it.
Given how deeply fertility has fallen in places like rural Serbia and Russia, there must be other factors at play. Not many Tumblr-feminist academic types in Kragujevac or Tula.
Despite the moral panic about Erdogan, coastal and urban Turkey is like Mediterranean Europe.
Here is a visualization of the Monty Hall solution, I agree that you should stay. https://i.imgur.com/CAkZjIX.png
You have BAD GOOD CAR listed twice
Because 2/3 of the time you are shown the bad goat, it's because you picked the good one
Thanks! I did mine the other way, with doors 1-2-3.
What, like a Range Rover?
On ChatGPT and the environment, my calculation is simple. I figure that anything that releases CO2 costs money, and anything that wastes water costs money. My understanding is that OpenAI is currently operating at -200% profitability, so that they are spending "only" three times as much money as they are taking in. So if I'm spending $20 a month on them, then the maximum amount of pollution I can be causing is like $60 of gasoline or water use. Not great, but not as bad as some other things I do. And a free customer can't be causing too much at all.
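The bound sketched here, as a quick sanity check (the 3x cost-to-revenue multiple is the commenter's assumption, not a published figure):

```python
monthly_spend = 20   # dollars paid to OpenAI per month
cost_multiple = 3    # assumed: they spend ~3x what they take in

# Everything they spend is an upper bound on resources consumed on my
# behalf, so my footprint is at most the equivalent of this much money
# spent directly on gasoline or water.
max_footprint_dollars = monthly_spend * cost_multiple
print(max_footprint_dollars)  # 60
```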
This is a clever way to get a rough upper bound.
To add to this: through their API they charge $2.50 for 1 million input tokens on ChatGPT 4o. 1M tokens is roughly the length of the series A Song of Ice and Fire. Far more than I can imagine most people using in a month, even accounting for how your conversation is repeatedly resubmitted.
I'd also guess their free users outnumber their paid users by a huge margin and that's part of the reason for their low profitability (along with capex and training ever larger models). The inference itself is probably fairly cheap per query.
Money ≈ impact is a reasonable order of magnitude ballpark, but definitely with an emphasis on "order of magnitude". IANA expert but from what few LCAs I've done, the "kg CO₂ eq. per USD" ratio hovers around 0.5 kg/$ for commodities (metals, wood), but can be an order of magnitude or more lower for end products (phones, dishwashers), or nearly an order of magnitude higher for direct emitters (concrete, petrol). So in terms of climate impact, $20 spent on one thing might be closer to $2 or $200 spent on another.
Definitely true. But it’s more that I have an upper bound to sanity check claims about how environmentally damaging AI products are. If I’m spending $20 a month on ChatGPT and $20 a month on Claude, I’m at least pretty confident that my impact on the environment is less than spending $40 a month on gasoline - not necessarily great, but not the hugely destructive thing people sometimes make it out to be.
Aye, I think that's a good framing of it
Isn't the whole concern about CO2 emissions that they are an externality whose harm is not factored into profitability calculations? That said, it is unlikely that the harm they do is *vastly* higher than the profit they provide for.
Yeah, I’m not saying that the harm of $20 of gasoline emissions is equal to $20 of harm. But I think it’s plausible that whatever harm that is, it’s an upper bound on the harm that is done by $20 of ChatGPT, because at least some of the money on ChatGPT is spent on things other than burning carbon.
Re #39 and the morality of ozempic vs willpower - doesn't it matter that there are a lot of people who are thin with no application of willpower, and only a few people who are thin due to heavy application of willpower?
I know a person who's naturally thin and is weirded out by people who think his thinness is from moral superiority.
It's because of the other side - being fat is still a sin. In the old days it was the sin of Gluttony. Now we've done away with sins, but there are still things we disapprove of, and being fat is one of them.
So (speaking as a Person of Amplitude) you do get the social, media, and medical harrumphing about vice and lack of willpower and laziness and stupidity and all that good stuff, often packaged in "but we just *care* about you and your health" but increasingly with no nice packaging; the obesity epidemic that costs the economy so much and burdens the already over-stretched health service means being fat is Evil and fat people are Evil.
So you need to scold and scorn fat people. No willpower? No self-discipline? Look at these people who are thin, they can manage to control their base appetites, why don't you emulate them, you sinner?
So if you get a thin person who says "No, I'm thin just because", that undercuts the moral element. What? No sacrifice, no self-discipline, no moral superiority? That means that maybe, if you can be thin 'just because', then maybe you can be fat 'just because' (yes, I know - "eat less, exercise more", "calories in, calories out" and all the rest of it).
But maybe for some people it *is* harder, and not because they're lazy and stupid and greedy and Evil. Maybe fatness is not a question of moral superiority after all?
No, this cannot be, because that would leave fat people off the hook! They'd take it as an excuse! So naturally thin people *must* have a story of only eating one leaf of lettuce and a glass of water for lunch, then running for twenty miles to burn off the calories!
> So if you get a thin person who says "No, I'm thin just because", that undercuts the moral element. What? No sacrifice, no self-discipline, no moral superiority? That means that maybe, if you can be thin 'just because', then maybe you can be fat 'just because' (yes, I know - "eat less, exercise more", "calories in, calories out" and all the rest of it).
As someone who stays thin with little effort, I'm inclined towards biological explanations of obesity for exactly that reason.
I'm naturally thin and roll my eyes at the people who think it's a matter of moral superiority, since it clearly isn't in my case.
To the extent people comment about it, everyone assumes I must be thin due to "working out" or "dieting", rather than just...genetically predisposed to not put weight on easily. I've learned not to press that point, cause it seems to make people rather uncomfortable. The same with insisting "no, actually I eat whatever and whenever, feel no compulsion to snack, and genuinely dislike sweet stuff". They'll keep poking for a sinful caveat (which is thankfully easy to give, saying you like salty/savoury is a good cover, even if I mean shellfish and they mean potato chips). It just seems like a very alien concept to most people, that there are ordered individuals who don't suffer from pitfalls like cravings or overeating, and mostly didn't have to apply heroic-level efforts to get there. I think that's why GLP-1s bother such folks: it feels like cheating, cause Everybody Knows you can't be thin in current_year without major sacrifice, divine intervention, lots of struggle, etc. Systemic forces and an obesogenic environment, defeated by an artificial pill from Big Pharma? That's the master's tools, man...to misquote Adam Serwer, the suffering is the point, when it comes to weight loss.
(I also think people mistake good habits for effortful willpower...Father taught me to brew my own cuppa at home when I was like 10, so that's how I've always done coffee, and it's perfectly satisfactory decades later. Hundreds of dollars and thousands of calories avoided yearly by not getting premium mediocre swill at Starbucks on the regular. Generalize and solve for the equilibrium - it's way easier to not dig a hole in the first place than to climb out of a pit later. Avoid temptation, don't fight it.)
>I also think people mistake good habits for effortful willpower.
Well, acquiring good habits can certainly take plenty of willpower, particularly if your circumstances make that difficult/inconvenient. Maintaining them once acquired is much easier, and I agree that healthy lifestyle propaganda should focus on that more.
Try explaining to them that you in fact like to gain weight and, no, you are not starving yourself, and watch heads explode (even doctors squirm).
While genetics do play some role I think habits are more important than most are aware. My parents, grandparents, siblings, aunts, uncles, and cousins all grew a spare tire around age 20-30. From my genetics, one might expect I would do so as well. But luckily at 18 I went to a fancy college where by comparison food was cheap, and picked up good habits (better to waste food than to waste health). Had I first gotten fat and then tried to change habits, I think my body would have shifted into a different metabolism and it wouldn't have worked.
>(better to waste food than to waste health)
I haven't heard that particular phrasing before, but rather like it. Personal mantra has been stuck on "food waste is a cardinal sin" for a long time now, so any antimemes that help unwind such negative overreactions are...helpful. Kind of at a Copenhagen interpretation of eating compromise: I can't be blamed for wasting food if I never personally touch it, so it's better not to try new stuff if there's a chance I won't like it. Not the best tradeoff for trying to move out of autist gastric comfort zones, e.g. eat plain dry Cheerios for dinner cause they can't hurt me. (Although possibly adaptive for the modern food environment, there's...a lot of incredibly weird artificed shit out there being passed off as "food" these days. Pizza flavoured potato chips? That's a no for me, dogg...)
In my case, I've always tended to eat lots of desserts and few or no veggies, but I still gain little or no weight.
I would love it if we had the IQ pill. Even in a world where I don't benefit from the IQ pill, such that people who have a naturally higher IQ than mine still exist but there's a new 'IQ floor' at my IQ, I'd be happy with that. I feel like I get enjoyment out of intellectual sharing with peers, and disappointment when a conversation is one-sided.
Also the world would hopefully be much better?
Exactly. Kind of like if we had widespread, artificial intelligence everyone could access.
Yeah plus, wouldn't it be better if the people who do spend a ton of time, effort, and "willpower" on not being fat could have those resources freed up to be directed somewhere else.
Didn't Musk himself admit in a tweet he was Dittman?
I interpreted that as 'I am Spartacus' - the evidence that Dittmann is a real guy and the owner of the account is pretty convincing, but Elon sincerely believes that doxxing is bad, so he banned everyone involved, suppressed the link, and lied about the answer so you wouldn't feel tempted to click it (since the headline doesn't directly say).
SPOILER ALERT!!!
I couldn't do it in my head so I had to "cheat" and write down Bayes' theorem. But here it is:
Let's say that World 1 (W1) is the possible state of the world in which you already have the special goat, and you want to know the conditional probability that you're in W1 given the reveal of the ordinary goat (R).
p(W1 | R) = p(R | W1) * p(W1) / p(R)
p(W1) = prior probability of W1 with no additional knowledge = 1/3
p(R) = prior probability that the host will reveal the ordinary goat rather than the special goat = 1/2
p(R | W1) = probability the host will reveal the ordinary goat conditional on being in W1 = 1 (he can't reveal the special goat in W1 because you have it)
Then just plug everything in:
p(W1 | R) = 1 * (1/3) / (1/2) = 2/3 chance you already have the special goat, and should stick with it.
The intuition behind this is that the host is more likely to reveal an ordinary goat in the world where you're already holding the special goat than in the world where you're not, so the reveal is a useful piece of information.
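For anyone who wants to check the arithmetic, the same Bayes update in a few lines (a sketch; the variable names just mirror the derivation above):

```python
# Posterior that our door hides the special goat, given that the
# host revealed the ordinary goat.
p_w1 = 1 / 3        # prior: our door hides the special goat
p_r = 1 / 2         # prior: host reveals the ordinary goat
p_r_given_w1 = 1.0  # in W1 the host has no choice but the ordinary goat

p_w1_given_r = p_r_given_w1 * p_w1 / p_r
print(p_w1_given_r)  # 0.666... -> stick with your door
```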
If one remembers the solution for the original Monty Hall problem, it becomes easy.
In the original problem, if you switch, you have a 2/3 chance of getting the prize.
In the special goat problem, nothing regarding the regular prize has changed, so it still has a probability of 2/3 of being behind the door you did not pick. So to get the goat you want to reverse your strategy.
That's very clever in its simplicity!
16. Please consider replacing this with the writeup by Maia, who apparently did all the legwork for the Spectator and got left out of the byline: https://maia.crimew.gay/posts/adrian-dittmann/
The way I find the most intuitive to get to the solution of the (really fun!) Monty Hall variation knowing the original is basically:
In the original Monty Hall problem, I think there is a 2/3 chance I have a door with a goat, and therefore 1/3 chance the other closed door has a goat.
That is still true in the new variation! However, now since I can see the normal goat, if I have a door with a goat, it must be the special goat. So there's a 2/3 chance I have the special goat.
But the "this is still true" has a big "feels intuitively right to me but [citation needed]".
----------------------------------------------
I think the rigorous way to describe it is more like:
I have a 1/3 chance to pick Car (C), Normal Goat (NG), or Special Goat (SG) at the beginning. Then, Monty will reveal a goat. This means we have the following scenarios with the following probabilities:
(1/6) Me: C | Monty: NG
(1/6) Me: C | Monty: SG *
(1/3) Me: NG | Monty: SG *
(1/3) Me: SG | Monty: NG
But the starred options are impossible because I can see Monty did not reveal the special goat. So staying gives me a (1/3) / (1/3 + 1/6) = 2/3 chance of getting the special goat.
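The same conditioning can be done with exact fractions, which makes the "vanish the starred branches and renormalize" step explicit (a sketch; the four branches and their probabilities are taken from the list above):

```python
from fractions import Fraction

# The four (my pick, Monty's reveal) branches with their prior probabilities.
scenarios = {
    ("car", "normal goat"): Fraction(1, 6),
    ("car", "special goat"): Fraction(1, 6),
    ("normal goat", "special goat"): Fraction(1, 3),
    ("special goat", "normal goat"): Fraction(1, 3),
}

# Condition on the observation: Monty revealed the normal goat.
kept = {k: p for k, p in scenarios.items() if k[1] == "normal goat"}
total = sum(kept.values())  # 1/2 of the prior mass survives
p_stay = kept[("special goat", "normal goat")] / total
print(p_stay)  # 2/3
```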
I still feel like my quote-unquote "rigorous" solution is missing something here because despite being fairly confident it's correct, it feels hand-wavy to say "we know the starred scenarios are impossible so we can just vanish their probability mass into thin air and sum the rest of them". I think it'd be more convincing with a visualization.
Vanishing their probability mass and renormalizing is how Bayes updates work when you get information that rules out some cases!
My answer to the Monty Hall twist, before looking at other solutions:
Let's walk through all the scenarios:
1. You select the Car door. Monty randomly shows one of the goats. The problem specifies that, by chance, we know we are in the branch where Monty shows the bad-goat, so the Good-Goat is behind the other door. Here we get the Good-Goat if we SWITCH.
2. You select the Good-Goat door. Monty is forced to show the bad-goat. Here we get the Good-Goat if we STAY.
In the event we select the bad-goat door, it's not possible for Monty to then show us the bad-goat, so this possibility is eliminated by the setup of the problem. So the two possibilities listed above are the only options, they are equally likely, and we don't know which we're in. So we should not prefer to stay or switch.
I wrote a simulation and it turns out I'm wrong. The simulation says you should only switch 1/3 of the time. So you should stay:
https://pastebin.com/YtvBWpc5
choices: 49914; # should switch: 16750; % should switch: 0.3355771927715671
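For reference, here's a minimal re-implementation of that kind of simulation (a sketch, not the linked pastebin code; door labels are my own). It conditions on games where Monty shows the normal goat and counts how often staying keeps the special goat:

```python
import random

random.seed(0)  # for reproducibility

def trial():
    """One game: returns (what Monty showed, what we're holding)."""
    doors = ["car", "special goat", "normal goat"]
    random.shuffle(doors)
    pick = random.randrange(3)
    # Monty opens a door that is neither ours nor the car's.
    openable = [i for i in range(3) if i != pick and doors[i] != "car"]
    return doors[random.choice(openable)], doors[pick]

stay_wins = switch_wins = 0
for _ in range(100_000):
    shown, held = trial()
    if shown != "normal goat":
        continue  # condition on Monty revealing the normal goat
    if held == "special goat":
        stay_wins += 1
    else:  # we hold the car, so the special goat is behind the other door
        switch_wins += 1

ratio = stay_wins / (stay_wins + switch_wins)
print(ratio)  # close to 2/3: staying keeps the special goat
```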
Yeah, the issue is that #1 is only half as likely as #2 because half the time in #1 Monty will show the Good-Goat.
A similar puzzle that I find helps with the intuition:
Three prisoners named A, B and C are locked in the dungeon. One of the three will be pardoned by the king, the others will be executed. The jailor knows which one is to be pardoned, but hasn't told them yet.
As the jailor walks by A's cell, A stops him, and asks him for a favor. "Look," A says, "at least one of B and C is getting executed tomorrow. Maybe both, maybe just one, but definitely at least one, since there's only one pardon for the three of us. So pick one of them that's getting executed, and tell me that they are being executed."
The jailor shrugs. "Sure," he says. "C is getting executed tomorrow. Not telling you about B, or about yourself."
"Ha!" says A. "You've fallen into my trap! My chances of survival used to be 1 in 3, because only one of the three of us was getting pardoned. But now I know C is getting executed tomorrow, so one of me and B is getting pardoned. So now my odds of survival are 1 in 2!"
I don't find that that helps with the intuition at all. :-/
Is there a joke I'm missing that every other link is about China and all the Twitter screenshots have the language set to Chinese?
I have my Twitter set to Japanese so that the #TrendingStories on the sidebar show up in Japanese (which I mostly can't read) and don't capture my attention.
Ohhhh, now I feel bad for misidentifying the language (which I also can't read). I feel like I'd be too distracted by the symbols I can't read for that to work for me but my brain already does a good job of filtering out the sidebar.
An easy way to distinguish them is the hiragana: they look very different from the kanji, and should stand out at a glance. If they're there, it's Japanese. If it's all kanji, it's Chinese.
To distinguish them from Korean, you can just learn to read Hangul: https://www.ryanestrada.com/learntoreadkoreanin15minutes/
Distinguishing Korean is easy, Korean has a lot of circles while Chinese and Japanese never have circles.
ぱぴぷぺぽ
…I'd say that Korean is easy to distinguish for the exact opposite reason, it has lots of perfectly right angles (at least on the computer). I guess the circles are also part of it, but… I dunno, Hangul as a whole just looks very "blocky" to me, including the circles.
Well, I'd say that Hangul is easy to distinguish because it looks absolutely nothing like characters. That's true, but I don't think it's going to help anyone who isn't already aware of the fact.
> Chinese and Japanese never have circles.
This is untrue; it's not unusual to see strings like 二〇二五 in Chinese.
If you want a hard way, the second half of the word "translate" that appears below all the tweets is in a form unique to Japanese. ( 訳 )
This makes me curious how often Scott takes advantage of Twitter's helpful offer to automatically translate English tweets into Japanese.
You could use an adblock extension, as I would've assumed you do already. Checking it right now, the uBlock Origin element picker doesn't seem to have any problems picking out the trending-stories widget and blocking it.
Thanks for the tip, I should try that.
Note that this only works in FF since Chrome banned UBO.
https://chromewebstore.google.com/detail/ublock-origin/cjpalhdlnbpafiamejdnhcphjbkeiagm
Feb 5, uBO still works on chrome.
Is this the perfect combo of knowing enough to manage the menu, but not enough to be distracted? Optimizing only for distraction, I'd go with something that makes no sense at all.
You can change the location used for the trending content independently from the site-wide language, by clicking "show more" on the trending content, then opening the settings menu and setting the location there. My X is set to English, but nevertheless my trending stories are in Japanese because I have set my location to Tokyo.
Huh. I like that trick.
Do you use ublock origin? If so, note that it can block more than just ads -- the "block element" feature (in the context menu) can nuke most things (though some especially offensive sites will try to make element filtering hard by randomizing the element ID or class name -- I don't know if twitter is one of them since I don't use it myself). Whether that's a noticeable improvement on unreadable text I don't know. I suppose it depends on whether switching twitter's language also alters elements you'd prefer to be in English.
"mostly"
Nice humblebrag here :)
May be easier to block the iframe with adblock or a similar custom ad blocker
Thanks for the shout out! Christ and Counterfactuals is great, hope I can write for them more in the future.
I don't understand how the variation is meaningfully different from standard Monty Hall. It plays exactly the same, you pick a door, host reveals a goat, if you switch at this point you have 2/3 chance of getting the car, so don't switch since you don't want the car.
That was my thought. The standard solution is to switch; if switching was also the solution here, that would mean it simultaneously increased the chances of getting the car and getting the goat. Which doesn't make any sense.
The numbers happen to work out the same, but they're conceptually distinct. Considering a variant of the problem with four doors might help clarify the difference.
Well, with 100,000 doors:
Standard Monty Hall: you pick one door, Monty reveals 99,998 goats, the remaining door has a car with probability 99,999/100,000 and your door has a car with probability 1/100,000. You want the car, so switch.
Also-standard Monty Hall: you pick one door, Monty reveals 99,998 uninteresting goats, the remaining door has a car with probability 99,999/100,000 and your door has one with probability 1/100,000. You want a special goat, which isn't the car, so stay.
Going into the "new" problem more formally, 99,998/100,000 of the original probability mass is ruled out by observing 99,998 boring goats. So there are two equally likely options: we picked the special goat, or we picked the car, which I'll normalize up to 1/2 at this point.
If we picked the special goat, the conditional probability of observing 99,998 boring goats is 1.
If we picked the car, the same conditional probability is 1/99,999. So the total space we're working with has a non-normalized size of 1/2 we-have-the-special-goat and 1/199,998 we-have-the-car. After normalization, 99,999/199,998 and 1/199,998 give us actual probabilities of 99,999/100,000 and 1/100,000, which is what we already knew we'd get from the standard Monty Hall solution.
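The same arithmetic, checked with exact fractions (a sketch; n = 100,000 doors as in the comment above, with one car, one special goat, and n - 2 boring goats):

```python
from fractions import Fraction

n = 100_000  # doors: one car, one special goat, n - 2 boring goats

# Unnormalized posterior weights after seeing n - 2 boring goats revealed:
w_special = Fraction(1, n)                   # prior 1/n, likelihood 1
w_car = Fraction(1, n) * Fraction(1, n - 1)  # prior 1/n, likelihood 1/(n-1)

p_special = w_special / (w_special + w_car)
print(p_special)  # 99999/100000
```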
What was the conceptual difference there?
A variant that gives you different answers is with four doors, three goats, and Monty only opens ONE door.
You'd need to actually specify what you have in mind. Are you switching to both unopened doors?
If you scroll a little further down this thread, you might see that I did.
There are four doors, and three goats (and one car). You pick a door, and Monty opens ONE of the other non-car doors. You then decide whether to stick with the door you initially chose, or to switch to one of the closed doors.
I agree it's simply the same. It seems suspiciously easy, like one must have missed something, if one thinks of it only in terms of one's own goal and never gets to the point of thinking about Monty Hall's intentions. What seems to make it difficult for those who get to that point is an intuition that Monty Hall is your opponent and wants to deny you what you want; see the first comment above (by Hilarius Bookbinder, and Shankar Sivarajan's reply).
You might have misunderstood my comment. Monty is trying to HELP you by giving you information.
Sorry that was unclear, I referenced your comment because I thought it was an appropriate and succinct reply to Hilarius Bookbinder.
Even though apparently we're not on the same page -- you're saying in other comments that the variant Monty Hall is conceptually distinct, whereas I agree with Vim it's the same, except that the contestant now wants to "lose", and therefore strangely easy (for those who know the original Monty Hall -- some of the confusion in the comments may be, not for the reason I pointed out in my first comment, but simply because people don't understand the original Monty Hall).
> since you don't want the car
this is where you got it wrong / the difference with the standard problem.
You want _one specific_ goat.
You can see how they differ by calculating the probabilities before he opens any door.
p(your goat) = 1/3
p(not car) = p(any goat) = 2/3
1/3 != 2/3 -> "you don't want the car" is not equivalent to what they are asking
The thing is that, for the problem to work at all, Monty has to not reveal your goat: If he does, it's just the classic problem all over again (you want the car over the normal goat). That ends up making it similar to the original problem, but in reverse.
I didn't calculate the probabilities, so they might end up being the same (I'd be surprised if they are though)
But regardless, it's not the classic problem all along.
In the classic it goes:
You pick p(any Goat) = 2/3 or p(car) 1/3
presenter open p(any goat) = 1
new:
you pick p(good goat) = 1/3 p(bad goat) = 1/3 p(car) = 1/3
presenter opens p(car) = 0, p(good goat) = ?, p(bad goat) = ?
The structure of the problem is completely different.
The probabilities _are_ the same, and yeah, it relies on the host revealing the bad goat. The problem before that is different, but once we're asking what happens when he opens the door with the bad goat, it collapses to the same as the original.
I'll hopefully learn the lesson not to be so confidently wrong in the future!
Also, the car would still be great. So there is no losing if you switch doors: if you switch, you have a 2/3 chance of getting the car, a 1/3 chance of getting the special goat, and a 0 percent chance of getting the bad goat (the only losing option).
Here's a variant which might make the difference clear:
There are four doors, and three goats (and one car). You pick a door, and Monty opens ONE of the other non-car doors. You then decide whether to stick with the door you initially chose, or to switch to one of the closed doors.
Now you can compare the "classic" version where you want the car or this version where you want a special goat.
Honestly I think this is a semantics argument. The original Monty Hall problem and this variation "as written" collapse to the same solution (inverted).
But the general problem of wanting something different from the presenter who always opens a non-car door would have different probabilities given different doors
This is the simplest and most elegant solution I've seen, at least for those who know the original problem and solution.
This is how I thought of it. The answer to the original Monty Hall problem gives me the answer to this problem.
> Related: Some Musk supporters in the comments suggest that maybe he hires the Chinese guy to level up his account, but his accomplishments (eg speedruns) are still his own?
It's more like somebody claiming to be an expert chef, and struggling to make scrambled eggs when streaming it. Really weird thing to lie about: https://www.youtube.com/watch?v=FmEe3eUPWq4
Regarding #17, the musk video game thing. It is clear to anyone who has any experience with POE that Musk has literally NO knowledge or skill in that game. This is massively overdetermined, unarguable, and it should suggest that none of his gaming accomplishments are legit.
I'm kind of assuming this was a ploy to garner clout with the tech bro/nerd demographic (his original base, which contributed to his success as it conferred status to his employees, allowing him to retain more high-end talent). If he just got on stream and dicked around with a noob character that would have been way more endearing and authentic
On the topic of his steroid use - he also appears to have "HGH gut", i.e. his notorious distended abdomen. This is speculative but likely all things considered
More relevant context: a speedrun in POE (called a race) *starts* with a level 0 character and no resources. So leveling is part of the event. Getting someone else to level for you would still be cheating.
I think going on and just being a noob would have been much better PR.
Well, it's plausible that he's decently competent in Diablo 4, even if he hired someone to grind there as well. D4 is much simpler though, and doesn't carry nearly as much "gamer cred" as PoE, so he "branched out". But yes, this debacle made me question my perception of him in general. If he's willing to lie this blatantly about such an insignificant thing (seriously, who cares about some "hardcore" vidya leaderboard?), what else is he lying about?
Well... a lot.
The most consequential ones are persistent lies about the capabilities of Tesla. These include claims that Tesla full self driving was already safer than a human... which were made about a decade ago, with FSD announced as being just a year away almost every year for the last decade (including 2024). Claims that Tesla will operate a fleet of robo-taxis, or that Tesla owners can make passive income by operating their cars as robot taxis, fold into this. We may also remark on the 2nd gen Tesla Roadster, for which he likely took $250+ million in preorders starting in 2017 and which has yet to see release, supposedly taking so long because he's sticking rocket boosters onto it.
Last October, the Tesla We Robot event worked very hard to look as though the robots deployed were autonomous and speaking directly to guests, when they were actually being teleoperated and spoken through by Tesla employees. Both the present capabilities and expected development time of the Optimus are constantly stated to be far in excess of anything observed. I wrote about this event a while ago, and Rodney Brooks has commented on both: https://rodneybrooks.com/predictions-scorecard-2025-january-01/
Given that Tesla is universally acknowledged to be massively overvalued relative to their actual production or returns to investors (dividends to date: $0), its valuation is explained by its promises of enormous future profit from the technology it has developed, not from its actual production of cars or anything else. This is an impression upheld by constant dishonesty to the tune of hundreds of billions of dollars. At first it was going to be a revolution in battery technology, then it was self-driving, now it's household robots, there's always something new on the horizon to take attention off the last promise.
Then there's claims that SpaceX will take humans to Mars in 2029 and that there will be a million-person city on Mars by 2050 (note that in 2011, humans landing on Mars was to be expected in 2021). And then there was the Hyperloop, hype for which has finally died down after going precisely nowhere.
I'm also under the impression that Musk has lied a good deal about his own biographical details, including his matriculation into Stanford and the idea that he spent a year traveling around Canada making a living by doing odd jobs and lumberjacking. There's lots of claims that he's lied about way, way more stuff, but I haven't looked more closely into those and will leave it at that.
Basically, he lies about a lot of things, and it didn't start recently. A lot of these lies have gone without enough critical attention by a combination of money (lots of people are heavily invested in Tesla and need the stock to keep climbing) and friendly media attention (he's a larger-than-life figure who consistently generates headlines by announcing incredible predictions and plans for technology). It's only lately, now that he's under way more public scrutiny and genuinely does seem to be more unstable than before (cf the recent, unfortunately paywalled, Sam Harris piece), that more people are realizing how often he does this.
> And then there was the Hyperloop, hype for which has finally died down after going precisely nowhere.
IIRC, even Musk himself admitted that the Hyperloop was never a serious proposal and was just a cynical ploy to try to kill CAHSR. Which at least makes him look better than believing Hyperloop could ever have actually worked would.
It is obvious to anyone playing either D4 or PoE that Musk is unfamiliar with ARPGs at the most basic, fundamental level, in a way that anyone who has spent even a few hours in either would not be. (Not knowing how the loot system works, how items are picked up, etc.)
This level of falsehood should call _all_ his "accomplishments" into question. How likely is it that one would decide to adopt this level of falsehood on a whim one day?
Yeah, all those rockets and electric cars must not be real.
The god-tier PoE2 character was real. Musk's contribution was not.
Tesla is not a company that Musk founded.
SpaceX is not a company that Musk founded.
Many people are not aware of either fact. Letting other people do impressive things and fronting would seem to line up with Musk's behavior.
Are you suggesting that Musk is due no credit for the successes of Tesla or SpaceX?
“No credit” seems a bit much.
But did he get more credit than he deserved, by getting some that others should have had? Possibly. The anecdote suggests that he would have few qualms with it.
(Should we infer that he did not deserve his eleven-figure pay package by Tesla, or that other people – or the company itself – should have had some of it? I genuinely don’t know.)
> SpaceX is not a company that Musk founded.
You're wrong.
As Adrian said, Musk did not originally found Tesla, but he very much was a founder of SpaceX.
More to the point: we have extensive third party documentation of Musk’s heavy involvement in SpaceX.
The man is a chronic fabulist, no doubt, but it’s a mistake to not give him credit for a sizable fraction of the astonishing accomplishments of SpaceX.
People are complicated.
Apparently this is correct. I'm not sure where I previously saw otherwise. I was under the impression he had only stepped in after the initial demo. At a minimum, then, he has a lot of money and can convince competent people to work for him and build successful things. This is not nothing.
This specific incident still casts heavy doubt:
A rich guy buys the expertise of a game player or players who make a very strong build. He claims to have done all the work. Obvious to everyone he contributes nothing but money for time. What do you credit him with here?
A rich guy buys the expertise of rocket scientists and engineers to make a very strong rocket. We have nothing but anecdotes (many his own) that he contributes more than money for time. What do you credit him with here?
What a bizarrely inane comeback.
The fact that Elon is as terrible at POE2 as me is strangely reassuring and makes me feel more kindly towards him 😁
We're all struggling with the new mechanics, lack of progression, and no good gear drops!
> The fact that Elon is as terrible at POE2 as me is strangely reassuring and makes me feel more kindly towards him 😁
Do you also relate to going on a podcast tour and bragging about your gaming accomplishments, and on the stream in question say things like, "My only complaint about Path of Exile 2 is that it's too easy."
As said over and over in all the comments: it *would* have been relatable if Elon musk had simply streamed himself dying over and over to the Act 2 campaign boss. The problem wasn't that he was bad. The problem was that he is insufferably smug about how good he is at something while so blatantly lying about it.
Since we're now living in Topsy-Turvy World, I've burned through a lot of my outrage stocks and now reserve the rest for really important and egregious stuff.
Musk lying or enhancing his game prowess is not one of those things. It's so silly I have to laugh. Yeah, if he's cheating, ban his ass (but GGG has its own problems what with the Tencent buy-out and people being pissed off they went on holidays over the Christmas and didn't address issues with the early access etc.)
In unrelated news, why do the Democrats keep shooting themselves in the foot and making me defend, or at least stand on the side of, guys I generally range in feelings towards from "eh, he's an idiot but who cares?" to "I would be very happy if they were fired into the sun".
I don't like Sam Altman! I disapprove of Sam Altman! I would be very happy if Sam Altman, Big Tech and the rest were investigated! But you're not going to do it by resurrecting McCarthyism, and I am forced through gritted teeth to agree: did you send out the scolding letters to Big Tech donors to the Harris campaign, Lizzie? People have the right to donate to the political party of their choice, and it's not enforceable to have senators doing what looks damn like "you can only donate to *us*, not them".
https://www.cnbc.com/2025/01/17/sam-altman-posts-letter-from-senators-concerned-about-openai-donations.html
"Are you now, or were you ever, a donor to the Republican party?" is not a good look, Lizzie, see what I said about McCarthyism and a senator getting ready to set up their own new House Un-American Activities Committee:
https://x.com/sama/status/1880303311842341152
"These donations raise questions about corruption and the influence of corporate money on the Trump administration, and Congress and the public deserve answers. Therefore, we ask that you provide responses to the following questions by January 31st, 2025:
1. When and under what circumstances did your company decide to make these contributions to the Trump inaugural fund?
2. What is your rationale for these contributions?
3. Which individuals within the company chose to make these donations?
4. Was the Board informed of these plans, and if so, did they provide affirmative consent to do so? Did your company inform shareholders of plans to make these donations?
5. Did officials with the company have any communications about these donations with members of the Trump Transition team or other associates of President Trump? If so, please list all such communications, including the time of the conversation, the participants, and the nature of any communication."
I await with much goddamn interest the revelation that Lizzie made the same requests of any companies that donated to the Biden inaugural fund. If she has reason to think Altman violated campaign finance guidelines, then go after him in SDNY (that does seem to be the court of choice for such, does it not?)
Otherwise, it's none of her business (she's senator for Massachusetts, Bennet is for Colorado, and Altman is living in California so she's not his representative and he's not her constituent) and she does not get to tell anyone "you are only allowed to donate to people I approve of". She's on the Banking Committee, which I don't think covers election or inaugural fund donations (correct me if I'm wrong) so this is just a piece of busy-body snooping which has no legal force. Any lawyers/political experts out there tell me more and if she can indeed compel him to tell her anything other than "none of your business, talk to my attorney".
It would be reassuring, were he not doubling and tripling down on lying, being comically incompetent, and banning people left and right who point this out.
https://x.com/elonmusk/status/1528955104463814656
I don't know POE but I can confirm his "double shield wielder" Elden Ring build he showcased on 23 May 2022 (fat roll mage with none of the typical stuff you'd see to compensate for heavy armor, estus flask not bound to quick use) has never been good in any version of the game.
At this point it's happened multiple times so I don't think we can say "oh it's just him being naïve about what his smurfs are grinding for him," he is obviously making some point we're not getting. You don't walk back to the same sort of petard that just hoisted you two years ago.
> he is obviously making some point we're not getting
I think you're discounting the possibility that he is genuinely delusional about his own competence.
I played PoE 1 for about 9 years, I finished all the challenges in a bunch of leagues, I've been playing PoE 2 since the launch in December. Raj is 100% correct here and it's obvious to me from the video that Musk has no idea what he's doing in it.
It's plausible he picked up how to play D4 well enough to buy an account and actually use it, but the PoE video betrays misunderstandings of fundamental mechanics that anyone who has finished the campaign would understand decently. It's impossible to think he reached the very endgame of Path 2 (in Hardcore, no less!) while thinking the item level of his weapons was the important part, for instance.
Happy to answer questions about this (or the game in general), it's something I know a lot about. Not sure if Musk has just gone nuts wanting everyone to think he's the best at everything, or if he really has been like this all along.
What is your opinion of PoE2 in general? The comments I'm seeing are veering between "this is terrible" and "what are all you scrubs whining about, git gud, I breezed through it" which is not terribly helpful.
The mechanics are different from PoE1 and I had to unlearn a lot of habits I picked up there. So far I hate it and I love it - when you do kill the act boss it really does feel like an achievement but my God you have to grind to get there.
Note: I am not a gamer of any description or by any means. I never spent my childhood/teens playing games. I tootled around a bit with Torchlight before trying PoE because everyone on my dash was talking about it. I have no idea of the mechanics - when the discussion starts with "look for an item with this suffix, then you can get 4% extra DPS by rolling an exalted on top of the base damage but not if the crits" my eyes glaze over. (God bless you, Pohx Kappa, for build guides; Righteous Fire was an absolute *revelation* to me: "Whee! I can just run through mobs and kill them without lifting a finger! I can just stand here and let them fling themselves at me and they die!") I don't engage with the trade mechanic so I have to pick my gear up off the ground like a savage (that was extremely funny to me, Elon manually picking up and transferring to the inventory) or hope some vendor in some town finally this time has the piece I need and that I have the currency to buy it.
But just maybe he *has* been playing like that all along, it's a habit he picked up when he started and he never changed? Yeah, he's most likely lying and cheating, but it's not absolutely impossible that he just plays badly, but does genuinely play (not advanced to where he claims to be, of course; that probably is someone farming for him). Because I'm not A Gamer, it's not that important to me, it just makes me laugh. But if you take games seriously, I do understand why this is A Mortal Sin worthy of burning at the stake.
>notorious distended abdomen
Notorious! People are talking about it!
What a surprise that most of the comments to date are about the MH variation.
However, I don't think it's an interesting variation at all - it just collapses to the original problem!
In the original problem, you should switch, because that gives a 2/3 chance of getting the car.
Here, again, if you switch (to, wlog, slot C), you have a 2/3 chance of getting the car, and if you don't switch, you have a 1/3 chance of getting the car. (None of the modifications to the setup change that, MH is still definitely-not-revealing the car).
Since the other remaining option is *good goat*, if you switch, you have a 1/3 chance of getting good goat, and if you don't, 2/3.
So now you shouldn't switch, but there's nothing interesting here beyond the original problem.
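A quick simulation bears out those numbers. One assumption I'm making explicit: the comment's arithmetic implicitly conditions on rounds where the door the host happened to open hid the *bad* goat (the host only guarantees he won't reveal the car), so the sketch below discards the other rounds.

```python
import random

def simulate(n=200_000, seed=1):
    """Estimate P(stay -> good goat) and P(switch -> car), conditional on
    the host having opened a door with the bad goat behind it."""
    rng = random.Random(seed)
    stay_good = switch_car = rounds = 0
    for _ in range(n):
        doors = ["car", "good goat", "bad goat"]
        rng.shuffle(doors)
        pick = rng.randrange(3)
        # Host opens a random unchosen door that doesn't hide the car.
        shown = rng.choice([i for i in range(3)
                            if i != pick and doors[i] != "car"])
        if doors[shown] != "bad goat":
            continue  # keep only the rounds the puzzle describes
        rounds += 1
        other = next(i for i in range(3) if i not in (pick, shown))
        stay_good += doors[pick] == "good goat"
        switch_car += doors[other] == "car"
    return stay_good / rounds, switch_car / rounds

p_stay_good, p_switch_car = simulate()  # both come out near 2/3
```

So staying gets you the good goat about 2/3 of the time, and switching gets you the car about 2/3 of the time, exactly as in the original problem with the labels swapped.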
I agree - there are two non-prizes and one prize. What's different is that you've changed the formulation from two indistinguishable non-prizes to distinguishable non-prizes.
Yep, the fact that people find a trivial variation interesting even here is the most interesting aspect of this situation. Just goes to show how unintuitive probabilities are for humans.
I agree, except everyone's forgetting Murphy's Law here. Were I in either Monty Haul problem, I would somehow always end up with the least desirable prize.
If you have a 50% chance of winning, then you have a 75% chance of losing.
Ozy Brennan's linkpost had one which I thought might be interesting to the crowd here. Maybe claims of declining testosterone levels are just an artifact of a change in how we measured it: https://eryney.substack.com/p/maybe-its-just-your-testosterone
I didn't work very much in BIG big buildings that weren't government or industrial so grain of salt, and I'm also just kinda restating the dude with the expensive degrees' case from the shovel swinging level.
A good chunk to most of the expense of a modern skyscraper happens, first, before the first structural pillar reaches above grade, and second, when you have to do all that systems bullshit: wiring and plumbing and HVAC and networking and so on.
Those steps are absolutely mandatory, and people have kind of homed in on the cheapest way to build buildings that reliably don't fall down, which involves basically building a big cage of columns and struts and beams and ties, then building the rest of the building around it.
So the only place left to cut costs is in the category of "things that are nice for people inside the building" and "things that are nice for people outside the building"
Those costs can go as high as you want them to. I've never seen a big building get more than a lick of paint and some precast concrete panels as exterior decoration, but I have seen some quite nice interiors - so if there is any fat left in the budget, it's going into nice lighting and fast elevators, not beauty for the enjoyment and edification of the hoi polloi.
A few thoughts on a16z.
First, comparing private fund IRR to public index IRR is dumb, and that's why no one in finance actually does that. They'd instead calculate a market-adjusted version of IRR, e.g. direct alpha, which keeps the cash flow comparison consistent.
Second, a16z have a bunch of dedicated crypto funds and that's probably where their crypto performance is concentrated, rather than in their flagship funds.
Third, their flagship funds constitute less than 10% of their cumulative fund sizes over that period, so using these as indicative of a16z's overall performance is misleading.
Fourth, venture capital in the U.S. has underperformed the market in general in the last ~25 years so this shouldn't be huge news.
Why is it dumb to compare public and private IRR?
If VC constantly underperforms the market, why do people still invest in it?
IRR has some flaws as a tool for comparing returns. One is multiple IRR values: when a project's cash flows change sign more than once (from negative to positive and back again), the IRR equation can have multiple roots, making interpretation difficult.
Not sure if that's what he's referring to or not.
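To make the multiple-IRR point concrete, here's a toy cash flow stream (the numbers are made up, chosen so the equation has two roots):

```python
def npv(rate, cash_flows):
    """Net present value of annual cash flows starting at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# A made-up stream whose sign flips twice: invest, receive a large
# distribution, then face another capital call.
flows = [-100, 230, -132]

# Both 10% and 20% are "the" IRR here -- the NPV is zero at each rate,
# so "the internal rate of return" isn't even well-defined.
npv_10 = npv(0.10, flows)
npv_20 = npv(0.20, flows)
```

Which of the two rates a solver reports depends on where it starts searching, which is one reason raw IRR comparisons can mislead.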
I touched on that in my comment, but basically they still invest in it because of a myriad of reasons:
- VC offers diversification benefits, since most wealthy people have a huge portion of their capital tied up in "the market" already and are seeking a broader, more comprehensive allocation (which could be for example 10% of their investible funds in VC, 10% in real estate, 20% in bonds, 20% at a large hedge fund, 40% in the S&P)
- it is more tech-weighted than the market and some investors care more about sector allocation than broad market returns
- it offers a chance at much higher returns if luck + successful investing go hand in hand (a16z isn't the only VC fund out there, and to an individual investor it's a fallacy to say "VC offers lower returns than the market" because while you can invest in the whole market, you can't invest in VC as a whole - you have to pick a fund, a fund manager etc., which creates the possibility that your VC fund manager will outperform the market or any other benchmark, relevant or not)
  - this point also works as a reply to people who believe hedge funds are trash because they have lower returns than the market on average: 1. you're never investing in the average hedge fund, and that's becoming truer every year as consolidation towards the large pod shops continues; 2. hedge fund strategies typically pursue higher risk-adjusted returns (their Sharpe ratio and other related ratios), so when you invest in a hedge fund you're not interested in getting a 7% average return for 16% average volatility, but maybe a 5% return for 4% volatility; 3. there's a wide range of hedge fund strategies with different risk/return characteristics, i.e. you're not expecting the same performance out of a long-short equity fund and a short-volatility fund. The same applies to VC: there are generalist VC funds that invest in a wide range of tech sub-sectors, and specialist ones more focused on fintech, crypto, payments, AI, datacenter infra etc.
- it offers a seat at the table for large investors who want to grow their corporate access/network (this is one of many non-return incentives)
- it's part of the investment mandate of a large asset manager or alternative investments company
- it's a bet on the future growth of VC-related fields (like AI, fintech and other subcategories that have definitely grown faster than the market)
- personal preferences regarding the exciting nature of investing in a high swing, low win rate type strategy (i.e. if you're the guy who invested in the fund that financed Facebook, that gives you bragging rights and potentially a future career out of it, if you're the guy who lost his money or under-performed the market, you don't have to advertise your failure and you probably had most of your money in the market/at a large wealth management firm anyway)
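The risk-adjusted-returns point above can be made concrete with the Sharpe ratio. The return/volatility numbers are the hypotheticals from the list, and the 2% risk-free rate is my own assumption:

```python
def sharpe(annual_return, annual_vol, risk_free=0.02):
    """Excess return per unit of volatility; higher = better risk-adjusted."""
    return (annual_return - risk_free) / annual_vol

# Hypothetical profiles: market-like vs. low-volatility hedge fund.
market_like = sharpe(0.07, 0.16)      # ~0.31
hedge_fund_like = sharpe(0.05, 0.04)  # ~0.75
```

On raw return the market-like profile wins, but per unit of risk taken the hedge-fund-like profile is more than twice as good, which is the kind of trade-off a sophisticated allocator is actually buying.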
Can you explain why you would care more about sector allocation than broad market returns?
When you say there is an element of potentially lucking into much higher returns, would that look like a particular fund (say, the 2011 A16Z fund) being massively positive one year thanks to a single extremely successful startup? If not, what would it look like to a person who bought a specific VC fund?
In what sense are you never investing in the average hedge fund? I understand you have to invest in some specific hedge fund, but shouldn't the average person (who doesn't have some special access letting them invest in the best hedge fund) assume that whatever hedge fund they invest in will have average performance?
Can you explain what it means to offer a seat at the table for large investors who want to grow corporate access? How does this eventually result in them getting good things?
What does it mean to say both that VC-related fields have grown faster than the market, at the same time that VCs have underperformed the market?
1. You could care more about sector allocation than broad market returns if you have a bias towards a sector ("I believe tech has a brighter future than a bunch of tech + a bunch of consumer cyclical + consumer defensive + financials + healthcare + energy + real estate + utilities + communications + industrials put together"), if you have an informed opinion and/or material non-public information that supports investing in one sector over the broader market, if you have ESG/green energy/anti-military-industrial-complex views and you want to make sure your investment is not solely focused on return but also on social impact, if the investment research from your equity/fixed income/quant analysts identifies more profitable opportunities in a sector relative to others, if you're worried about the impact of certain presidential executive orders on certain industries relative to others, etc. Basically anything from "I know nothing but I have a bias" to "I am better informed than almost anyone else in the world on this topic" can justify caring more about sector allocation than broad market returns.
2. It could be from a single extremely successful startup, or it could be a range of a few successful startups that went on to be valued higher in subsequent funding rounds, increasing the unrealized gains and portfolio value of the fund. Most VC funds have a 8-10 year investing horizon, so LPs commit for a very long time by typical investment standards. One element of lucking into much higher returns comes from the many swings, few hits nature of VC investments. Simplification ahead: Imagine you have $100m to deploy, and your mandate says you need at least 20 investments for diversification purposes. Let's say 5% of companies that are investable into in the VC landscape have a >10x potential, and 95% will go to zero or won't be worth the trouble. On your 20 investments, that'd be 1 investment that is expected to >10x. The luck factor is in the spread of ">10x". Say the distribution goes like this: 1% of companies will turn into 20x, and 0.05% will turn into 100x. If within your 20 investments, the one that goes >10x goes 10x, that's great, but the eventual failure of the remaining 19 probably won't make you a star manager. If your >10x company goes >100x, how much of that delta came from your genius analysis of the company and how much of it came from pure luck? Since the entire business model of VC is predicated on getting a few calls very right, the luck factor can expand your success further than your raw analytical skill can. This model is a gross simplification of the process; many funds don't just invest but also use their own network of experts and portfolio companies to create synergies and help their companies expand and reach the next funding stage.
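A quick Monte Carlo sketch of that simplified model shows how wide the luck spread is. The payoff table is my loose reading of the illustrative percentages above, not real data:

```python
import random

def fund_multiple(rng, n_investments=20):
    """Equal-weight portfolio multiple under a stylized power-law payoff:
    ~95% of companies go to zero, ~4% return 10x, ~0.95% return 20x,
    and ~0.05% return 100x (made-up numbers from the comment's model)."""
    total = 0.0
    for _ in range(n_investments):
        u = rng.random()
        if u < 0.0005:
            total += 100
        elif u < 0.0100:
            total += 20
        elif u < 0.0500:
            total += 10
        # else: write-off, contributes 0
    return total / n_investments

rng = random.Random(42)
multiples = sorted(fund_multiple(rng) for _ in range(10_000))
median_fund = multiples[5_000]      # the typical fund loses money
top_decile_fund = multiples[9_000]  # the lucky funds look brilliant
```

Identical managers, identical skill, identical strategy, and the top decile still ends up looking dramatically better than the median purely through which draw from the same distribution they got.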
Another element of the "luck" aspect is that certain periods in time offer much better investment opportunities than others. The first graph on this page shows the average VC fund return by launch year: https://emaggiori.com/venture-capital-returns/ You can see that VC funds that started in the 1990s had the best opportunity set for investments, and then the next best moment to start a VC fund was around 2007-2011, when tech started its 15-year strong bull run. You could be very talented, open your fund in 2021, and be shit out of luck because VC valuations were sky-high. You could be very talented, open your fund in 2010, and have a statistically much better opportunity set for your investments. In both cases, the talent is very much real, but the luck part is fickle.
3. The average person cannot be a hedge fund investor as they have to be an accredited investor. I guess by the law of averages, the average hedge fund investor does invest in the average hedge fund, so in that sense I am statistically wrong. What I wanted to point out is that even if you don't have access to the very best hedge funds, if you're an accredited investor you can still contact many hedge fund managers and get your money in the door. In that sense, even the slightest amount of research done on your own time (for example looking at average risk/return profiles for different strategies, and then looking at the track record of the fund managers you're talking to) should turn you from "a chill guy who invests kinda randomly into the US public equity markets" (someone who buys a broad market index) to "a guy who invests in a specific HF strategy with specific risk/return details, with a specific very real fund manager guy with a real track record", which to me involves many more layers of due diligence than investing in the market. Say I invest in a merger arbitrage hedge fund, I should know that this strategy requires leverage, involves left tail risk, and I should probably ask the fund manager if they're more interested in cross-border & vertical mergers or in friendlier, domestic mergers because that also has an impact on risk/return expectations. It's difficult for me to accept the saying that "the average hedge fund investor [...]" because it hides a whole lot more diversity than "the average equity market investor", basically.
4. Individual LPs might want to create a close relationship with a powerful venture capitalist, because some VC people can be helpful in raising capital for their own projects or companies. Corporate venture capital, CVC, involves nurturing niche goods & services ideas that the company doesn't want to devote in-house R&D dollars into. As you can imagine, CVC decision making involves different goals than your typical VC fund. The CEO and board of a company that has a CVC arm might want to dump money into it for competitive reasons rather than focusing on returns.
5. Some VC-related fields have grown faster than the market, but at the same time VCs on the whole have underperformed it. You can see here https://cepres.com/insights/financial-services-sector-shows-outperformance-in-vc-deals-through-2019 that financial services and biotech VC IRRs were much higher than the market return across the 2003-2019 period observed. VCs on the whole have underperformed because the money-weighted IRR across all sectors has turned out to be lower than the market return.
The only way comparing private IRR to public IRR makes sense is if the cash flow dates and outflow amounts are identical in both scenarios, since the timing is critical for the IRR calculation. Private funds make very irregular cash flows, and so the only way you can make an apples-to-apples comparison is if you do a counterfactual using private fund cash flows with concurrent public index returns (this is essentially what the direct alpha metric does via discounting). The graphic on Twitter says it uses Cambridge Associates IRR, but even CA says "Due to the fundamental differences between the two calculations, direct comparison of IRRs to AACRs is not recommended" (https://www.cambridgeassociates.com/wp-content/uploads/2018/07/WEB-2018-Q1-USPE-Benchmark-Book.pdf).
To that end, thinking of the IRR of an investment in a public index is weird to begin with. The whole point of IRR is to account for the timing of outflows and inflows, so the concept makes little sense for a public index; it's just an ordinary total rate of return. But again, that makes it an inappropriate comparison. Incidentally, I can't even find Cambridge Associates reporting a net IRR for a public market index anywhere, which highlights how odd this comparison is.
IRR has other problems like the multiple values thing Gordon mentions. It also makes an implicit assumption that all distributions are re-invested at the IRR rate, which amplifies (i.e. exaggerates) the sign of any IRR away from zero. LPs' uncalled capital often is sitting in something like the S&P500 before it's called into private fund, and likewise distributions are often re-invested into something like the S&P. There's a "modified IRR" metric that tries to account for this, but raw IRR certainly does not.
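For the curious, the direct alpha idea is simple to sketch: future-value each private cash flow to the reference date at the public index's return, then take the IRR of the adjusted stream. This is a minimal illustration with made-up cash flows and a made-up index path, not anyone's production methodology:

```python
def irr(dated_flows):
    """Annualized IRR via bisection; dated_flows = [(t_years, cash_flow), ...].
    Assumes NPV is decreasing in the rate (a single sign change in the flows)."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in dated_flows)
    lo, hi = -0.99, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

def direct_alpha(dated_flows, index_at):
    """Scale each fund cash flow by the index's growth from its date to the
    reference date, then take the IRR of the scaled stream (at the original
    dates). A fund that exactly tracks the index gets an alpha of zero."""
    T = max(t for t, _ in dated_flows)
    adjusted = [(t, cf * index_at(T) / index_at(t)) for t, cf in dated_flows]
    return irr(adjusted)

# Toy example: invest 100, get back 180 after 5 years (fund IRR ~12.5%/yr),
# while the index doubles over the same period (~14.9%/yr).
fund_flows = [(0.0, -100.0), (5.0, 180.0)]
index_at = lambda t: 2 ** (t / 5)  # made-up index path
alpha = direct_alpha(fund_flows, index_at)  # ~ -2.1%/yr: fund lagged the index
```

Because the scaling bakes the index's path into the cash flows themselves, the resulting alpha respects the irregular timing of private fund calls and distributions, which a naive IRR-vs-index-return comparison does not.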
The interpretation of that spreadsheet is also weird. 5 of the first 7 a16z funds do have a higher net IRR than their S&P500 IRR. The ones that don't are very young funds -- this person's data only goes to 2018, and the ones with near-zero IRR are still on the downside of the J-curve.
I was also too flippant in my remark about US VC underperforming. If you look at the vintages of the a16z funds shown and calculate pooled direct alpha using S&P500 as the benchmark, you'll get a pooled direct alpha of ~5% (meaning public returns with the private market cash flows would come out behind). If you go all the way back to 2000 through today, you'll get a direct alpha of about 0.4%, i.e. basically no difference. (When adjusting for risk however, it is quite possible that the VC alpha would be zero or negative; I don't have the time to crunch through that however.) This is using Preqin data, by the way. Unfortunately they don't have cash flows for a16z funds so I can't look at them specifically.
I guess the bigger point I'm making here isn't that the claim is wrong. It just doesn't look like anything I'd expect from someone who knew what they were doing.
Re #33 pay on results coaching service
> I do worry that even if you officially say “pay on results”, therapy results are naturally fuzzy and hard to assess, and it’s too aggressive to refuse to pay your life coach who’s put dozens of hours of work into your case, so most people will say “yeah, I guess that kind of worked in a sense” and pay the money (this works even better if your clients are “lifelong pushovers”). How would one design a version of this system which avoided this failure mode?
Yeah we attempt to solve this by making the dollar amounts rather large. Generally people aren't going to pay 4–6 figures because of "yeah, I guess that kind of worked". More on that here: chrislakin.blog/p/the-case-for-pay-on-results
Link to the service: https://chrislakin.com/bounty
(Is the post missing a link?)
Another approach might be to define hard endpoints for success - something that can't be 'faked'. For example, "I always wanted to go skydiving, but anxiety makes this goal seem impossible". Or for someone in crisis, have them make a journal, where they go from a baseline of wanting to commit suicide every day to going a fortnight without any thoughts of suicide.
I always do “pay whenever you feel satisfied”. Subjective is great. Concrete endpoints are hard anyway because goals change a lot
Concrete endpoints can be subjective. I understand that goals change, and perhaps it would help to measure your outcomes if you didn't tie it to pay.
Part of my job is to design clinical trials, and often these have subjective endpoints we have to track. Hard, prespecified endpoints are important, but sometimes these endpoints are things like, "lower back pain", which no outside observer can quantify.
If I want to determine whether to use your services, telling me "these people paid for these services, given a policy of 'pay when you're satisfied'" is useful information, but not a direct answer to my question. It's not a bad endpoint, but it's still a surrogate endpoint. If I'm thinking of engaging your services, I don't want to know whether I will ultimately pay for those services within the policy parameters - even permissive parameters such as those. If I'm depressed, I didn't wake up thinking, "I hope I get to pay for CL's services"; I woke up thinking, "I hope CL's services help me not feel so low."
On the flip side, you're hoping that people's lives change, I'm sure. But you're also hoping their lives change to the point where they're willing to pay you for your services in helping that to happen. So for you the hard endpoint of success is, "felt services were worth paying for". These two outcomes seem like they're probably closely aligned, and I'm not claiming otherwise, but since the overlap isn't perfect there's room for interpretation error.
23. The poll is 1.5 years old for a 5 year prediction and very ambiguously worded. I don’t think it’s at all useful for anything but what people were maybe thinking a year and a half ago.
In case anyone else was wondering, that really fertile province in southeastern Turkey is Sanliurfa. It has both an unusually religious local population and a large population of Syrian refugees.
No Amish, Haredim, or Elon Musk, which were my first three guesses.
Even that province had a ~3.5% year over year decline in TFR which is just an insane rate.
Am I crazy, or is the Monty Hall variation exactly the same as the original with a different choice due to different goals?
There’s still the exact same chance the car (the original goal) is behind the unopened, unchosen door: 2/3. The only difference in this scenario is that you don’t want the car.
Yeah that was my thought as well
It's not. It's different because Monty COULD have opened the door with the prize you really want (the special goat), because he thinks you want the car. That he didn't is relevant information to factor in.
I don't think so. The scenario as given is that he showed you the normal goat. The only actual question is "Do you switch?" And the probabilities that inform the decision are the exact same probabilities as in the original: 2/3 chance of the completely untouched door hiding a car. The only reason we make a different decision here than the original is because we don't want the car—we want the thing that has the 1/3 chance of being behind the untouched door (and therefore 2/3 chance of being behind your currently chosen door).
It's "Do you switch, given the information presented?" If you change the probabilities of opening certain doors given the location of each prize, you change the information you get from him opening them.
The scenario is that you're shown the normie goat. He cannot show you the special goat. That would be a different scenario. Therefore the information presented is exactly the same as the original, where the 2/3 chance of the car being behind one of the two doors you didn't choose collapses onto the completely untouched door. The only difference between this and the original is that you want the non-car hidden object, so you don't switch.
Suppose there is a coin where you're 50% sure it's fair and 50% sure it's double sided and always lands heads. You flip it 10 times. It lands heads each time. It couldn't have landed tails, because that would be a different scenario, so you conclude that it's still 50:50.
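The coin analogy can be made concrete with a quick Bayesian update, using the numbers from the comment above. The "it couldn't have landed tails, so still 50:50" reasoning fails because the observation is far more probable under one hypothesis than the other:

```python
# Bayesian update for the coin example: 50% prior on a fair coin,
# 50% on a double-headed coin, after observing `num_heads` heads in a row.
def posterior_fair(num_heads: int) -> float:
    prior_fair = prior_double = 0.5
    lik_fair = 0.5 ** num_heads   # P(all heads | fair coin)
    lik_double = 1.0              # P(all heads | double-headed coin)
    return (lik_fair * prior_fair) / (lik_fair * prior_fair + lik_double * prior_double)

print(posterior_fair(10))  # 1/1025, roughly 0.001 - nowhere near 50:50
```

The same logic applies to Monty's choices: which doors he *could* have opened, and with what probability, changes what his actual choice tells you.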
Yeah, I was completely wrong about that - the fungibility of the goats from the perspective of the host is a) relevant, and b) paramount for comparison to the original problem. Everything else is still true.
The numbers happen to work out the same, but they're conceptually distinct. Considering a variant of the problem with four doors might help clarify the difference.
Conceptual distinctiveness isn't relevant, just the probabilities. Probabilities aren't inherent to object or even object states, just our knowledge of those objects/states, so there's nothing gained by for some reason talking about them as if they're nonfungible.
Have you seen the classic joke of simplifying 16/64 by canceling the "6" in both numerator and denominator? That's basically what you're doing here. As I said, if you have four doors instead of three (three goats, and Monty opens one of the non-car doors remaining after you make your choice), your approach would get you the wrong probabilities.
I saw your elaboration on your 4-door version higher up, and the reason it doesn't map is because you still have Monty revealing only 1 door—in order to exemplify the underlying probabilities with these more-door versions, the crux is that you always have to be left with just 2 doors: the one you originally chose, and one completely untouched door.
It's interesting because in the "Monty Fall" version, where Monty doesn't know where the car is and just happens to open the door to the goat by accident, switching doesn't affect your odds.
So you might think that, since Monty doesn't know where the special goat is, him just happening to reveal the normal goat by accident wouldn't affect the odds. But Monty does know where the car is, and avoiding the car improves your odds of getting the goat, so it does matter.
(But yes, the odds end up working out the same - you have a 2/3 chance of getting the goat if you stay and a 1/3 if you switch.)
Yeah. The odds are still the same precisely *because* the same objects (the goats, collectively) are still fungible from the host's perspective, as in the original. And he's still avoiding showing you the car. Basically everything is the same because the host's knowledge and intentions are still the same, and that's really the only way information is injected into the system aside from the reveal, which is also essentially the same.
33. Therapists could work on commission like real estate agents: define a goal and agree on a price. Possibly pair with prediction markets to i) determine if the goal is met and ii) let the therapist sell action so they don't have to wait until the goal is achieved.
I thought #47, the Jensen Huang story, sounded especially batshit because he (the CEO of Nvidia) is cousins with the CEO of AMD - was it the same aunt and uncle, and where did they send her?
PSA: But apparently this is a common misconception. They're not related. Even the author of the book Chip Wars gets this wrong.
Sorry, they are related. They're not *first cousins* though. They're *first cousins, once removed*.
Re 20: I harbor neither positive nor negative feelings toward a16z's founder, although I have had to unfollow him on X because his constant politicized posting was ruining my timeline. However, institutional investors don't look at the S&P 500 as a benchmark for every strategy. The risk and return characteristics of venture capital funds make them incomparable to the S&P 500:
1. The business model of venture capital is to have a few investments outperform by a very large margin, while most investments are expected to have negative IRR. The companies in the S&P 500 aren't expected to mostly go bankrupt in the next 10 years; in fact, they're mostly expected to continue growing their bottom line at around 5-7% per year pretty much indefinitely. The fact that recent stock market performance was driven by a small number of tech stocks does not make the market as a whole behave like a venture capital fund.
2. Once a fund company reaches critical mass, which hinges on luck a lot more than most people realize (even more so for venture capital), its return is typically expected to go down as the opportunity set of high-growth companies to invest in gets smaller. You can "easily" deploy $100m into a ton of seemingly high-quality startups, but the job gets many times harder when you raise $1B for your next fund and have to deploy most of it quickly (this is one of the many inefficiencies in capital allocation at large funds). The S&P 500, however, doesn't suffer from its own size in the same way; its performance hinges more on the macroeconomics of the markets its companies operate in and the quality of management's execution of corporate strategy (which is to "maximize EPS" at the CEO level, since most incentive structures emphasize those sorts of metrics). Here again, comparing the returns between VC and the largest US companies is a mistake.
3. Another reality check more than a reason to defend a16z performance is that the types of large investors who put money into subsequent funds are likely already heavily invested in the US stock market and may want to get involved in venture capital for reasons that have absolutely nothing to do with "I want this investment to beat the stock market!". Typical reasons include: seeking diversification away from large caps and into small caps, embracing more risk for the hope of higher returns with the acceptance that higher returns aren't guaranteed by higher risk, creating powerful friends and expanding corporate access/network, making a more or less informed bet on certain technologies that VC fund x might be more inclined towards than VC fund y, etc.
4. Similarly to Private Equity funds, VC funds typically own a large amount of stock in private companies. This means the typical VC portfolio can "withstand" a prolonged drawdown in public equity markets by having its businesses be valued less frequently (for example, only at the time of an exit at a new valuation). LPs who invest in VC know this, and presumably they go along for the ride willingly. Keep in mind this is an entirely behavioral advantage; even if public equity values get "updated" 5 times a week and your typical VC company is valued maybe once every year or two, the underlying value of each company fluctuates with whatever price new investors are willing to meet selling investors at, which can theoretically change at any time, so in that sense VC and PE companies aren't that much safer investments than public equity.
The better case to make against an investment in a16z funds since 2010 would be to go to each individual LP, ask them for their rationale for investing in the first place, and deconstruct the behavioral biases and co-mingling that led to this point. I can't do that, but I suspect if you did, you would find that most investors were acting rather rationally based on their risk and return objectives, and were actively choosing VC investing while knowingly giving up the relative safety of the broader stock market. Could you have put all your money into an ETF that tracked the stock market for 0.2% fees per year? Sure! But filthy rich people and cash-rich corporations don't put all their money into one basket, and they don't pursue the same goals with their investments.
I commend shachaf in #2 for including a necessary detail that is often left out of incautious retellings of the Monty Hall problem: that Monty knows which door has the car. Unfortunately, they still left out another necessary detail, which is that Monty is REQUIRED to open a non-car door and give you a chance to switch. The standard solution to the original problem only goes through if this is stipulated. (If Monty is allowed to see what door you picked before deciding whether or not to open a door and let you switch, then for all you know, Monty might do it ONLY to the people who originally picked the car in order to psych them out, and force everyone else to stay with their incorrect first pick.)
But if we assume this works like the ordinary Monty Hall problem except for the special goat, then keeping your original pick has a 2/3 chance of getting the good goat, and switching has a 1/3 chance, so you should stay.
The easy way to see this is to note that, per the standard problem, switching should give a 2/3 chance of the car; since the bad goat has been revealed, staying must give a 2/3 chance of the good goat.
If you don't trust the easy way, you can break the problem down into 3 scenarios:
A. Your original pick is the good goat
B. Your original pick is the bad goat
C. Your original pick is the car
Initially, each of these has equal odds (by symmetry).
In scenario A, Monty HAS to reveal the bad goat (it's the only door that isn't the car and isn't your initial pick). So the fact that Monty DID reveal this does not change this scenario; it was inevitable.
In scenario B, Monty has to reveal the good goat. This didn't happen. Therefore we can't be in scenario B; its odds are reduced to zero.
In scenario C, Monty has a 50/50 chance to reveal either the good goat or the bad goat. This means this scenario had a 50% chance of being falsified, if we were in it. Thus the weight on this scenario is halved; i.e. half of the possible worlds in class C "died" when Monty revealed the bad goat, rather than the good goat.
1 chance of scenario A, plus 0 chance of scenario B, plus half chance of scenario C, combines to give us 2:0:1 odds, i.e. 2/3 chance scenario A, 1/3 chance scenario C. (Same answer as the "easy way" above.)
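The three-scenario breakdown above can be checked by simulation. A minimal Monte Carlo sketch, under the stated assumptions (Monty always opens a non-chosen, non-car door, choosing uniformly between the two goats when both are available, and we condition on him revealing the normal goat):

```python
import random

def trial(rng):
    """One round of the goat-variant Monty Hall game.

    Returns True/False for whether STAYING wins the good goat,
    or None when Monty revealed the good goat (outside our scenario).
    """
    doors = ["car", "good_goat", "bad_goat"]
    rng.shuffle(doors)
    pick = rng.randrange(3)
    # Monty opens a door that is neither the pick nor the car,
    # choosing uniformly when two such doors exist (pick == car).
    openable = [i for i in range(3) if i != pick and doors[i] != "car"]
    opened = rng.choice(openable)
    if doors[opened] != "bad_goat":
        return None  # he showed the good goat: not the scenario given
    return doors[pick] == "good_goat"

rng = random.Random(0)
results = [r for r in (trial(rng) for _ in range(100_000)) if r is not None]
print(sum(results) / len(results))  # converges to 2/3: stay to get the good goat
```

Scenario B trials are always filtered out (Monty must show the good goat there), and only half of scenario C trials survive the filter, which is exactly the 2:0:1 weighting derived above.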
I believe your claim that "ride-sharing is a natural monopoly" is completely wrong. I have been publishing analysis of Uber since 2016 and have never seen any objective analysis demonstrating the type of powerful scale economies that natural monopolies need. Can you produce any evidence of this?
Urban car services existed for a century without any tendencies to high concentration, much less monopoly. Uber never had any Facebook-like Metcalfe's-law network effects where users highly valued the fact that other people used the app. Uber's astronomical growth in its first decade produced $32 billion in losses. Just as there is no evidence of major scale economies, there is no objective evidence that Uber is more efficient than traditional taxis.
Uber was a purely predatory company. It used anti-competitive subsidies and its ability to sustain those $32 billion in losses to drive lower-cost, more efficient competitors out of business. It only achieved breakeven after 15 years because (post-pandemic) it drove fares much higher and driver compensation much lower than they had been before Uber began operating. Those billions in subsidies - and Uber's demonstrated ruthless behavior - destroyed any possibility that new market entry could discipline Uber's ability to raise fares and impoverish drivers at will. Two journal articles document why Uber's economics meant that it could never operate profitably in competitive markets:
Will the Growth of Uber Increase Economic Welfare? 44 Transp. L.J., 33-105 (2017)
Uber's Path of Destruction, American Affairs, vol.3 no. 2, Summer 2019
"Uber was a purely predatory company. It used anti-competitive subsidies and its ability to sustain those $32 billion in losses to drive lower cost, more efficient competitors out of business."
I don't think this makes sense outside the context of natural monopoly. If you constantly need to burn money to put competitors out of business, and anyone can easily enter at any time, you'll always be burning money. My impression was that the goal was to burn money, put competitors out of business long enough to exploit a natural monopoly, and then raise rates. With the natural monopoly coming from the fact that the average rider wants to go with a big network because it will have the most drivers (and therefore shortest wait), and the average driver wants to go with a big network because it will have the most riders (and therefore most money).
1. You failed to reply to my request for independent analysis showing Uber was a "natural monopoly". You just repeated the assertion that it was. Would be happy to wager a considerable sum that you can't find any.
2. "Natural monopolies" have well understood economics, such as enormous scale economies (e.g. a high % of fixed costs so that the marginal cost of growth is very low). Uber had none of these features.
3. The claim that riders want to go with the network that has the most drivers/shortest wait ignores the actual economics here. Riders wanted the network with the lowest fares, which was the result of $32 billion in subsidies. Uber had more drivers because of those same subsidies. If riders had had to pay the actual cost of their rides, they would not have chosen Uber. No transportation company has the Metcalfe's-law network effects you are claiming here. People don't fly Southwest or United because lots of other people do; they only care about the price and service offered to them.
4. Likewise, drivers want the best compensation and conditions. They don't care about the size of the network. There's no natural correlation between the size of a transport company and the level of wages offered. Uber has driven (the already awful) driver compensation to below minimum-wage levels in many places. Uber used anti-competitive power to destroy the normal workings of the driver market.
5. Claiming that "golly, it's not rational to invest in companies that don't have powerful competitive economics" suggests you haven't noticed the powerful forces that not only don't care whether "market competition" is maximizing overall economic welfare, but are fighting fiercely to undermine the few forces protecting "market competition".
Once Uber spends all that money on subsidies, can't a new competitor enter the market afterward and force them to spend more? It seems like Uber should never be able to get profitable enough to pay off those subsidies.
Uber's hyper-aggressive growth in its first decade was not designed to exploit scale economies and achieve lower unit costs than competitors (as Scott incorrectly argued). It was predatory behavior designed to drive more efficient competitors out of business and to ruthlessly convince everyone that its domination was inevitable and that future competitive (or legal or political or journalistic) challenges would be hopeless. Uber achieved enormous anti-competitive market power because it had $13 billion in investor funding by 2015. This was 2,300 times more than Amazon's pre-IPO funding. Amazon could fund most early growth out of positive cash flow because it had strong, legitimate efficiencies. Uber had no legitimate efficiencies and lost $32 billion. It only reached breakeven post-pandemic, when it dramatically raised prices and cut back service. No new entry of any significance occurred because everyone knew that Uber would ruthlessly retaliate, and everyone knew that no one in government would lift a finger to stop predatory behavior designed to protect its artificial anti-competitive market power.
But Uber has retreated from some markets? And in every city I've ever lived in, there has been at least 1 Uber competitor?
Although, I think your point that "Urban car services existed for a century without any tendencies to high concentration, much less monopoly" is a very good one.
Why would it be "hopeless" if Uber will always lose money in the face of competition?
3. Actually, some people do fly with some airlines because they fly more routes and thus offer better connections. This is not the same as flying more passengers but it is strongly correlated to it (as having a large passenger market share helps serve many less-used routes with reasonably large planes.) Other people flying high usage direct routes on flexible tickets prefer airlines that have more departures per day as this makes flexibility much more valuable. This is also correlated to having more passengers.
As far as I can tell, Uber tried to do this but the "profit off a natural monopoly" part never happened and they had to give up.
The big difference between modern ride hail and traditional urban car services is that modern ride hail can connect a rider on one network to any car on that network, whether visible or not, while traditional urban taxi hailing either connected a rider (who didn’t have a network) to any visible car (regardless of network) or connected a rider to a car through pre-scheduling a ride. The modern ride hail has much more value to network effect than the traditional urban taxi services, so it has more claim to natural monopoly. (Though if drivers can all have two phones, one on Uber and one on Lyft, then the network effect disappears.)
You have absolutely no evidence demonstrating that this "network effect" had any huge impact, much less the $100 billion impact that Uber's investors were pursuing.
You have absolutely no evidence showing that any other transport service realized billion-dollar impacts from their apps.
As your final comment suggests, any impacts (positive or negative) depend much more on whether there is meaningful market competition than on any "technology" issue.
Do you have any evidence that it did not have a huge impact?
The two journal articles cited in my original reply to Scott lay out Uber/urban car service economics in great detail
Why did it have more ability to sustain losses than its competitors? It's not like an existing car company that created a rideshare division (the way one created a self-driving car division).
>Those billions in subsidies—and Uber’s demonstrated ruthless behavior—destroyed any possibility that new market entry could discipline Uber’s ability to raise fares and impoverish drivers at will.
Did it? There exist Uber alternatives in medium to big cities. As long as drivers are not forced into exclusivity, and margins aren't too tight (in which case, who's being harmed?), a competing service can arrive.
In Poland there's Bolt, Uber, FreeNow and some old taxi companies which added an app with a map and price. And many (most?) people have several apps installed and pick the cheapest offer.
And many (most?) drivers belong to more than one network.
I don't see natural monopoly here.
Main barrier to entry is lobbying/satisfying all the regulations.
Re: link 13. A person I was close to for many years went through a long pattern where she would receive some new mental health diagnosis, and rather than her life improving as she started receiving more effective treatment for it, she seemed to incorporate each diagnosis into her identity, lower her own expectations of functionality, and become less and less capable of living a normal life. She went from being largely functional (able to hold down jobs, complete college courses, etc.) with some difficulty and occasional stumbling blocks, to being highly dysfunctional (not only unable to hold down a job, but asked not to come back to a volunteer position because she was so unreliable that they found it easier to plan around her never being there), and eventually to the point of rarely leaving the house at all. She was continuously in therapy throughout, and claimed that it helped, but from the outside, it never appeared to in any way that I could tell. It appeared to me that every time she received a diagnosis, she would start associating with people online who'd made that diagnosis part of their identities, take on board all their input about what to expect of herself and how to live her life, and invariably end up worse off for it.
One person who I talked to about this, a friend who I made after I lost touch with the first person, summed up her own experiences on the subject as "Yeah, the mental health community is super toxic."
In the beginning, when she picked up new diagnoses, I experienced a sense of relief: "Thank goodness she'll be able to get some assistance for this condition she's had all along." But eventually, I started to dread the prospect of how she'd respond to any new diagnosis. This certainly won't reflect everyone's experiences with mental health diagnoses, but I don't think she's entirely alone in those experiences either.
I believe Freddie de Boer has said something similar. Unfortunately, I can't quickly find the post where he said people are worse off as a result of having mental illness as an identity they cling to rather than seeing it as a problem to attempt to overcome.
https://freddiedeboer.substack.com/p/the-gentrification-of-disability
...or rather, that's one of the more-banger ones on the topic, there's a long tail of sundry and similar. I don't think he's done a The Basics summary post on that topic yet. Hopefully in the next book though!
Thanks, that post also links to the one FionnM links to.
"Defining yourself by dysfunction is a great way to stay dysfunctional."
https://freddiedeboer.substack.com/p/multiple-personality-disorder-probably
Anyone know how the different fertility rates in Turkey correlate with Kurdish population?
I'm curious whether Turks have a distinctly different TFR than socioeconomically equivalent Kurds.
30. Seems like this ignores the elephant in the room: all of southeast Turkey has markedly higher fertility rates. This region, of course, contains provinces with Kurdish majorities. The Kurds may be about a fifth of the Turkish population right now (https://www.cia.gov/the-world-factbook/countries/turkey-turkiye/#people-and-society), but this could very well change in the future—and certainly comes with many domestic and foreign policy implications (see: Syria right now).
It's also the part of Turkey closest to the Syrian Civil War - I bet if you could dig into stuff like childhood and disease mortality, you'd find them unusually high there compared to the rest of Turkey. I think the biggest driving forces in fertility reduction are major declines in childhood mortality and reliable contraception.
At least that would be in line with Kurds having higher fertility than the rest of the country in Turkey, Syria, and Iran, but lower fertility in Iraq. At least that is what I got from skimming this article. (Didn't read all of it, it's a much-more-than-you-wanted-to-know analysis.)
https://cfri-irak.com/en/article/the-recent-iraqi-demography-between-demographic-transition-and-ethno-confessional-differences-2022-06-27
> 1: Why running for Congress will ruin your life (unless you’re already rich). It costs ~$100K out of pocket before you get campaign funding, and you have to take a ~yearlong break from your career to campaign. If you win, you need to maintain two residences (one in DC, one in your district) on your $175K Congressional salary. Also, you have no power your first term, nobody will let you do anything, and you spend the whole time trying to get re-elected.
It doesn't have to be out of pocket. Self funders are mostly losers. If you can't get a few thousand people to pitch in a two figure amount or a few bigger backers then your chances of winning are low anyway. This guy might know some people who ran but I'll bet none of them got through.
This is really something that I find fascinating because this is a system that's very public and extremely important. But I guess no one really takes the time to look into it? It's weird. While I, as an engaged citizen, have limited influence I at least have more than an unengaged citizen. And it doesn't take that much to be engaged.
I've been thinking about why that is. And I think the reason is that there's little reason to do outreach qua outreach. If I know how to influence politics (and I do to some extent), then spending money and resources explaining that is probably a worse use of time than advocating for what I specifically want. And, like all knowledge, if you don't know something then you can't judge how trustworthy the other person is. Especially because politics is not predictable even for insiders. Also, there's absolutely a dynamic where rich outsiders get promised results in exchange for cash and then it's just, "Darn, well, we only had a small shot anyway."
Anyway, are rationalists/EA ever going to politically organize effectively or is it just going to be scattershot campaigns that peter out as with that guy who ran for Congress?
What do you mean by "politically organize effectively"?
Organize in such a way they are able to enact their policy preferences and resist things against their preferences. I could get into specifics of how I think they should do it but there's various ways to do it and my way would just be a strategy not the one true strategy you would have to follow.
Maybe they already are out in California and I just missed it. But it doesn't seem like SF or really anything in California is being run according to tech/rationalist/EA principles let alone anything Federal. It seems like they tend to get an idea, raise a lot of money, then it fails and the infrastructure just kind of moves on to some other EA cause area rather than building durable political influence.
The one mild success I can think of is that a few staffers were persuaded (they seem to be non-EA/rationalist types) on the AI Risk stuff. Which isn't that impressive. It's selling Democrats on regulation they're already ideologically inclined toward. And from what I've seen, it wasn't the AI risk people inserting staffers but just persuasion of pre-existing staffers. And not only was that a fairly limited success (non-binding even on Federal agencies), but it's about to get reversed.
I remember some Stanford conservative business school types that were poly/rationalist ranting about how they wanted to influence the Trump administration too and it struck me that they resorted to just putting out a general call on a podcast rather than... anything else. Not even the open auditions the Trump team held. It was weird.
I mean, we have various lobbyists and campaigns and so on, and there's a decent AI risk think tank infrastructure in DC. The main reason things aren't bigger is that SBF was more excited about that than anyone else, we let him do most of the work, I'm told he built a pretty impressive lobbying network, but then it obviously all collapsed and nobody would touch us for a while, except in the places where there was something that had zero connection to him whatsoever, which wasn't that many places (mostly AI).
Sure, SBF was an unforced error but a relatively subtle one. Subtle in the sense that it's an amateur mistake but understandable. Ironically, a movement with deep roots in startup world forgot about key person risk.
If I can be blunt: You have a motivated, rich base that's small but not tiny. The issue is it keeps on shooting itself in the foot. And this isn't a new movement at this point. Why is it so bad at this? Is it just it's mostly engineers so most don't have the talent/interest to do bog standard political stuff? Is it that a disproportionate number are recent immigrants and so disconnected from the American political system? Is it that all the high earning people prefer to work in tech (but isn't one of rationalism's differences that it's willing to pay tech level salaries for stuff like this)?
Or did they just unironically swallow leftist beliefs about how money buys success in politics and end up chasing a mirage? Was SBF successful because his parents had a political background so he knew the basics of things like bundling (which he didn't even do all that well)?
I'm genuinely curious, not trying to be mean. I could equally ask the same thing about Elon accepting a powerless commission. But the tech right is not what you have insight into.
Something better than the SBF-led campaign for that guy in Oregon. Apparently he didn't check whether the guy was a non-starter in the district - the other candidate in the primary had a local network! I'd have assumed he checked, because that seems like the first sort of thing you would check before trying to get into a political race.
I can't really blame SBF for that one, he just ponied up the cash. It was a new constituency, so I think the idea there by the candidate (Carrick Flynn) was that he had a good chance.
But like you say, the first thing I did was (a) check out who the other candidates running were and (b) check out the demographics of the newly-carved out district.
Sure, he could expect to do okay with the university electorate, but a lot of the rest of the district was farming/forestry. And the other candidate had union ties to the farmers/foresters, and the rest of the slate were campaigning on local issues not "send me to Washington where I'll never come back here again and will spend all my time working on some big brain global issue and not fighting for higher wages and lower taxes for you".
That's precisely the failure mode of rationalist/EA campaigns once they get outside their little Bay Area bubble, and if that sounds unkind I'm sorry, but you can't just win by "hey, here's a great idea!", you have to show to the local voters why it will be a great idea *for them*. And Flynn's local roots just weren't strong enough - it was "I fecked off to the Big Smoke once I could get out of here" versus "I moved here, I'm well-in with all the unions, and I worked in local government here". Of course Salinas won.
Have you guys even tried getting a bunch of rich guys on board to fund a populist candidate who will take over the state/country and get all of your policies implemented?
It should be phrased "(unless you're already rich, currently hold or have recently held local office or have or can tap into a previously-existing financial and voter base)."
Treating "congressman" as an entry-level position is pretty wild; the starting point should be it's something you work your way up to through state politics/partisan involvement/extra-partisan involvement, but which a few people can (occasionally) force their way into with a siege-tower made of money. The people who think "I'm pretty nice and pretty smart, if most people saw how nice and smart I was, they'd definitely want me as their congressman" are likely to be hopelessly naive or loopy narcissists.
Yep, agreed. If you want to skip working your way up you need outstanding achievements in something else and that something else needs to translate well in specifically the voter base.
You can just walk into lower level positions though. There are entry level ones.
There are a number of "one issue, one term" candidates who won as representatives to the national government, but the problem then is that unless they manage to link up with the major parties, or have a good network, they'll be isolated and never manage to achieve anything, so when re-election time comes up, they'll fade back into obscurity.
A somewhat successful example of this kind of campaigner is Martin Bell, who went from a career in the BBC as war correspondent to standing as an independent against Neil Hamilton (embroiled in scandal at the time) back in 1997 as "the man in the white suit":
https://en.wikipedia.org/wiki/Martin_Bell#Independent_politician
"On 7 April 1997, twenty-four days before that year's British general election, Bell announced that he was leaving the BBC to stand as an independent candidate in the Tatton constituency in Cheshire. Tatton was one of the safest Conservative seats in the country, where the sitting Conservative Member of Parliament, Neil Hamilton, was embroiled in sleaze allegations. Labour and the Liberal Democrats withdrew their candidates in Bell's favour in a plan masterminded by Alastair Campbell, Tony Blair's press secretary.
On 1 May 1997, Hamilton was trounced, and Bell was elected an MP with a majority of 11,077 votes – overturning a notional Conservative majority of over 22,000 in the 4th safest Conservative seat in the UK – and thus became the first successful independent parliamentary candidate since 1951."
He did try a second bite at the cherry but this time the big parties didn't play ball:
"In 2001, Bell stood as an independent candidate against another Conservative MP, Eric Pickles, in the "safe" Essex constituency of Brentwood and Ongar, where there were accusations that the local Conservative Association had been infiltrated by a Pentecostal church. In this election, Labour and the Liberal Democrats did not stand aside for him. Bell came second and reduced the Conservative majority from 9,690 to 2,821.
Having garnered nearly 32% of the votes and second place, Bell announced his retirement from politics, saying that "winning one and losing one is not a bad record for an amateur"."
The UK’s very different to the US at a federal level. A congressman has 700,000 constituents. A Martin Bell equivalent (Anderson Cooper?) might do well in the US running through a primary because of pre-existing name recognition, but someone like Richard Taylor (the Kidderminster Hospital MP) would have a much harder time. Your issue needs to resonate with too many people and not be co-opted by someone with a pre-built network.
> 18: Related: Sam Harris says he has been friends with Musk since 2008, but he noticed a sudden shift for the worse in his personality around 2020 which made it impossible to stay friends with him. He gives the example of Musk losing a bet with him that there would be 35,000+ COVID cases in the US, refusing to pay up, and launching personal attacks on Sam when asked to do so.
I've been drawing a lot of analogies between rationalist/tech spaces and the Technocratic movement a century ago. Normally I think of it more as a kind of structural repeat: same underlying forces leading to similar results. But Musk is, beat for beat, going through the exact same journey as specific industrialists in the period. It's weird to see it so close.
You might already know this, but his maternal grandfather was involved with the Technocratic movement (in addition to being a racist and a chiropractor, both of which make me not love him): https://en.wikipedia.org/wiki/Joshua_N._Haldeman
I did not, thanks. This reminds me of the number of former Soviet apologists (or in some cases spies) who are now pro-China. Guess there's something in the genes.
My favorite is Adam Tooze, the leftist public intellectual whose grandfather was one of the most notorious Soviet spies ever. And OK, we don't choose our relatives. But then Tooze dedicated his most famous book to his grandfather, then went on and on in the intro about the influence he had on Tooze's life..... Now, Tooze is one of those 'well I'm not saying I'm pro-China, butttttttt.....' types. What a coincidence!
https://en.wikipedia.org/wiki/Arthur_Wynn
His father also did some suspect things. Three generations, as they say.
One of the more frustrating realizations I had is how many people who either advocated directly against US interests or in some cases were outright traitors (either communist or fascist) simply ended up better off for it and paid no serious price. And how many of them, especially the leftist/communist ones, are still around and influential.
> 19: Ozy profiles George Perkins, an early 20th century businessman and reformer who thought that monopolies combined the best features of capitalism and socialism, and dreamed of an America where JP Morgan employed everyone with enough benefits to serve as a social safety net. Related: Weekly Anthropocene profiles Ozy.
This is kind of still a popular moderate liberal (think Matt Yglesias) position. The FDR vision is a bunch of oligopolistic companies that are heavily regulated/partnered with the government who use efficiencies and fat profit margins to subsidize unions and generous benefits for employees and replace cash rewards with prestige/promotions.
John Nye pointed out in "War, Wine and Taxes" that Britain had more state capacity than France because it concentrated brewing in some large firms that it could tax (and that were willing to pay taxes to keep their monopolies).
Yeah, this is actually a fairly normal state of affairs. Though it has its own costs and I don't think they've seriously grappled with them.
...I feel like they're just reinventing feudalism.
If it makes you feel better their anti-small business pro-big business attitude is insanely unpopular.
I'm fairly sure the Trump administration is going to push for that kind of system anyways, simply due to the fact it gives them more top-down control over corporations. Also lets them keep their base happy by providing jobs and benefits to them while indirectly punishing dissenters.
Nah, the Republicans are highly reliant on small business support. And ideologically committed to it as well. If anything the Republicans have been positioning themselves as anti-big business and pro-small business even more strongly under Trump.
> bunch of oligopolistic companies that are heavily regulated/partnered with the government
This part is also a favorite in autocratic countries of all kinds. Giving the oil company to your cronies is a great way to prevent a rival power base from forming.
Of course, such companies are often very stagnant, but that is a price most autocrats are very willing to pay for stability.
You're right it's a favorite in many autocratic countries and that it often leads to stagnation. But there's also several democratic examples.
> I have the same question as this Twitter commenter - why is this even happening in Turkey, a country which I wouldn’t expect to be too plugged into Western cultural and political trends?
Turkey's extremely plugged into western cultural and political trends. The Turkish word for secularism is laiklik which is a direct borrowing of the French laïcité and first entered the Turkish vocabulary during the French Revolution. The first reforms in response to the trends sweeping Europe happened in 1792. So three years after the revolution started. It took less than a month for news to get to the Ottoman capital from Paris and it was normal to have it translated and published.
Also Turkey's birth rates declined long before this. The Ottoman Empire's population basically stagnated from 1700 to 1900. The Ottoman population in 1700 was about 27.5 million. In 1914 it was 25 million but they'd also lost some significant territories. But even including them it was only something like 33 million. Turkey's population in 1940 was 17 million and it became 85 million in 2020. So rather than the usual case of high fertility that decreased with modernity Turkey's fertility rate increased with modernity and is now returning to its premodern stagnation.
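To put those figures on a common scale, here's a minimal sketch computing the compound annual growth rates implied by the population numbers quoted above (the function name and framing are mine; the figures are the ones in the comment):

```python
def cagr(p0, p1, years):
    """Compound annual growth rate implied by start/end populations."""
    return (p1 / p0) ** (1 / years) - 1

# Population figures quoted above, in millions:
ottoman = cagr(27.5, 33.0, 1914 - 1700)  # incl. lost territories, ~0.09%/yr
turkey = cagr(17.0, 85.0, 2020 - 1940)   # ~2%/yr

print(f"Ottoman 1700-1914: {ottoman:.2%} per year")
print(f"Turkey 1940-2020: {turkey:.2%} per year")
```

Under those numbers, Ottoman growth was well under a tenth of a percent a year, versus roughly two percent a year for 20th-century Turkey, which is the "stagnation then boom then return to stagnation" pattern the comment describes.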
One issue with all this discourse is it just assumes the past was one global Tsarist Russia with high fertility fueled by a lot of rural peasantry. That was not the case.
Yes, also there are millions of Turkish immigrants living and working in Europe and Turkish Gastarbeiter have been in Germany since the 1960s, so of course Turkey is plugged in. Erdogan represents the part of the population desperate to stop the deeper trends toward secularization and devaluation of traditional male and female roles but he is failing, just as all these attempts to stem the influence of technology eventually fail.
I'm not sure I agree a victory is inevitable. But yeah, Turkey has far more European influence than either the Europeans or conservative Turks like to admit. Though in turn some liberal Turks overemphasize the commonalities. And the wider Turkic world in general is probably the most secular part of the Islamic world and has been for a while. Some of the stuff with Ataturk talking about how he's irreligious but surrounded by a bunch of pious Arabs who see him as a fellow believer could probably be written by some of the Turkish intelligence officials in Syria today.
> 32: China has abandoned “wolf warrior diplomacy” where they insult everyone for no reason. Seems like a smart move.
I think they abandoned it years ago. Roughly in 2022-23. There was a logic to it: it was basically burning diplomatic capital that China felt it didn't need for domestic wins. Now it thinks it needs that for other causes. We will see what happens after the moment passes. Maybe they learn that keeping some dry powder is useful. Maybe they will have new pressing needs.
> 34: Why does China, an advanced economy, have the tap water issues that we associate with developing countries? Maybe because Chinese people near-universally believe that drinking cold water makes you sick, so they all boil their water anyway, so there’s no incentive to have water that’s safe to drink without boiling. I notice there are many things like “Chinese think drinking cold water will make you sick” and “Koreans think you’ll die if you leave the fan on overnight” - is there any health belief that foreign countries make fun of Americans for? (I’m not looking for conspiracy theories about vaccines, more like something we all take for granted).
Wearing shoes indoors.
China is not an advanced economy. Even the Chinese government describes it as in process of modernizing. It also tries to present an image of itself as advanced in a way most of it isn't and a lot of those prestige projects come at the cost of more basic quality of life. Think of the Soviet army with its massive arsenal of nukes but an inability to consistently issue soldiers with socks. The attempt to say, "No, no, we just have a CULTURE where socks (I mean clean water) isn't important" is propaganda cope.
China's idea that it can technology its way out of what is basically a series of more basic and prosaic economic problems is the great hope of many economies in similar straits. And it's never worked. But maybe this time it will, we'll see.
Anyway, Common Prosperity (and no small amount of Xi's popularity AND unpopularity) comes from a furious program of building rural and poor city infrastructure. This involves taking money from cities and coastal areas and investing it in things like making sure remote villages have electricity. He thinks this will also boost economic growth for reasons a standard issue social democrat would agree with. Except it doesn't. But Xi can't be wrong so they have to figure out how to expand that infrastructure and still hit fairly aggressive growth targets. This puts a lot of pressure on lower ranking members and also usually means more demands made of the relatively wealthy business and professional classes. But also it leads to a large amount of cut corners.
Still, it's a net improvement and a large amount of what's fueling support for Xi and nationalism. Xi is, whatever his other flaws, not personally corrupt. And people will forgive a sincere zealot more easily than a corrupt go along to get along type (and Xi's the former). And he's had a program that has improved the living standards of the lower classes. He's combined this with nationalist rhetoric which is always popular at such moments of transition but I don't think he's cynically exploiting it. Instead I think he's genuinely a nationalist who has got many common people on board with the program.
That's certainly what China believes. It's also what Turkey believed. And South Korea believed. And Japan believed. And Mexico believed. It didn't work for any of them. Maybe it'll work for China. If you look at South Korea in particular, who probably broke out of the problems the best, technology was only a part of wider structural reforms that China doesn't look to be willing to stomach.
Also China has about 4 million IT sector jobs vs 5 million in the US. And while they're not as well paid as in the US they're pretty highly paid. But in both countries that's not a huge part of the work force.
Anyway, ignore I said all this. I very much want Uncle Xi and Uncle Sam to get into a bidding war over who can pump more money into my industry. It's very important and will definitely solve everything.
Turkey and Mexico both bet on specific industries with varying levels of success. Mexico ended up growing mostly from integration with the US but suffered issues more related to human capital than technology. They are key to many advanced US supply chains, for example. Turkey also did something similar to Europe but, for basically military/political reasons, invested more in domestic production and as a result they're significantly ahead of East Asian advanced economies (and even further ahead of China) in certain things.
Technology isn't unimportant and you're right that productivity is part of the puzzle. But it's not complete and it's not the only necessary driver of productivity.
Xi has said (in so many words) he thinks the issue was too generous welfare, not a lack of technological progress, in those countries. So he thinks that they can outwork the problem. Which is a fairly typical response for that economic model honestly.
China isn't more successful than Mexico. They're about equal.
I agree they have oversized welfare states. I don't think that it's entirely a story of too much government though. In particular, I tend to emphasize the inability to upgrade mid and low end human capital (which is education) and uneven infrastructure as part of the issue. And I don't think outworking the problem is a real solution. There's only so many hours in the day and you will start to see fatigue as growth slows (as mathematically it must) so the outsized rewards to working diminish.
There's also the consumption issue. China has chronically low consumption which is fairly typical for its economic model but hypercharged in China. The economy is also significantly more government and state owned enterprise controlled. Mexico has significantly more per capita consumption because it has a more normal looking economy.
And the debt issue. And the financial system weakening. And so on.
The Chinese economy needs significant structural reforms and cooperation from its trade partners (which is mostly the US and its friends). I think they're hoping technology is a get out of jail free card. But there's plenty of examples of economies that were innovative but that were not broadly successful.
(Also, China does have significant brain drain.)
> Xi is, whatever his other flaws, not personally corrupt.
That's an interesting claim. What made you think so? I mean, being corrupt is like the default assumption for dictators.
Well, the real answer is we don't fully know. And he's probably at least somewhat corrupt. But compared to his predecessors and contemporaries he appears to be living a less lavish lifestyle, has excluded his family from positions more, that kind of thing. So it at least gestures in that direction.
> I mean, being corrupt is like the default assumption for politicians.
Fixed it. ;)
He's definitely corrupt. His family was worth a billion dollars or more, and that was *before* he ascended the throne. Xi has periodically retaliated against Western media reporting on his family wealth, and some of them, like Bloomberg, knuckled under. In addition to them getting much better at hiding their wealth, that's part of why you don't hear much about it.
> His family was worth a billion dollars or more, and that was *before* he ascended the throne.
I think the point is that he isn't a parasitic presence on the country he's ruling, which is what people usually think of as "corrupt".
Well, to be clear, we're grading on a curve here. But also: "his family" is mostly his brother in law who made his money in real estate. Of the estimated roughly a billion (I've often seen $750 million quoted) he's at least 300 million and maybe up to 600 million of it. He's a fairly typical story of a red prince who went into business during the boom. That's about half of the almost a billion his family is worth and most of it happened before Xi was in a position to hand out favors. (His brother in law's parents, and Xi's own father, were in a position to help through things like government favors and smoothing regulation of course.)
That still leaves a nine figure amount though. Now, how does someone with an official salary of $22,000 a year have about a dozen relatives worth tens to hundreds of millions of dollars? How does his daughter afford Harvard tuition and a nice, trendy apartment? It's not his book sales, which officially go to the party. But also he doesn't have Putin style palaces, or if he does no one's discovered them. (If you do have some report on that I'd love to see it and update. Also I like to see lavish corrupt palaces in general.)
Additionally, during the height of the anti-corruption campaign, Xi personally led the investigation into his own family and while he found that they had done nothing wrong (of course) he also found some of them had done things that might be perceived as wrong and ordered them to cease various activities. Which might be performative but the guy we have the most visibility into (his brother in law who has significant business abroad) really did sell off a lot of conflicts of interest.
Maybe I'm wrong here. But Xi strikes me as a true believer, not as someone who's grabbing what he can or living a lavish lifestyle.
China is not *not* an advanced economy. "China" barely exists, and is much more decentralized than many in the west assume. Some places in China are 3rd world. Most places in China are 2nd world. A large minority of places are 1st world, with a few (the tier 1 coastal cities) arguably 0th world, in league with other "0th World" places like Tokyo and Singapore. There's an America-sized group of people in China at American (or above) living standards, and a billion people at varying levels below that.
(and yes, Tokyo and Singapore are 0th world. If NYC is 1st World...well, Tokyo and Singapore definitely aren't that.)
What year was that? My impression is that cities like Shanghai have more or less caught up, and the rest of China has grown substantially.
In NYC, no one has cars, people live in 100 year old tenement buildings, and they have to go to laundromats too (hanging clothes outside seems more a cultural thing than a wealth thing. The Japanese can def afford dryers, only a few weird ones buy them).
>having to walk/take the train and hang your clothes on a clothesline
if that's poverty (speaking as a certified poor person) it doesn't seem that bad. Definitely not worth, e.g., slashing the welfare state or letting inequality skyrocket to fight.
Poverty in the US felt way worse, one wrong move or accident and you'd be out on the street
Having parts that are advanced and parts that are not advanced is just being not advanced. Many countries have a single advanced, global city that can compete with the best of the first world. China has more than one because of size. But that's the normal dynamic.
There's an American sized number of Chinese people who are living in conditions that range roughly from places like Poland or Romania on the low end to South Korea on the high end. And there's about a billion people living in what are somewhere on the border of third and second world conditions, basically more like Ukraine or Uzbekistan. And a few really rural areas that are poorer than that but are marginal.
The people who live better lifestyles than New York do so because they are elites even within their context. There's very little in terms of bleeding edge technology or even diffusion of that technology that you can't find in first tier American cities. But Singapore or Shanghai benefit from a large number of relatively low paid workers which create conveniences for professional class people that their American equivalents don't have. This is a real benefit but it's also one that's characteristic of a less advanced economy because it means there's a lot of cheap labor floating around. Also, they benefit from an East Asian cultural preference for urban density that creates agglomeration effects.
That isn't to say it's fake. It's not. A software engineer in Shanghai or Singapore really does have access to conveniences and amenities that one in New York City doesn't. But that's not because China's advanced. It's because it's got a bigger gap between rich and poor and that benefits people on the top half of that divide. In effect, if you're a professional, China (and Singapore et al) are the best tradeoff of being modern while still having cheap labor. If you like cities anyway. But you can also get that in places like Bangkok where the nation is more obviously not advanced but has one big international city.
To take a simple example, food delivery is better over there not because their apps are more advanced but because they have more drivers and chefs who are paid less to the point that ordering out is significantly cheaper.
I assure you the reason why Shenzhen, Tokyo, and Singapore are nicer than NYC has very little to do with the availability of cheap labor. Have you been to Manhattan recently? The place is turning into a total dump. The MTA is filled with trash and sketchy people and looks like a dungeon. There are random people just hawking fake gucci bags on the street (which often smells like piss). Tokyo and Singapore are, suffice it to say, not like this. And these "conveniences" are available to everyone, not just elite software engineers.
Bangkok (and Thailand as a whole) may be a better example of this "advanced in parts but not advanced" that you describe. I would agree Thailand is "not an advanced economy" even though there are parts of Bangkok that feel 1st or 0th world. Most of the city is a total dump, and when you leave it, it's even dumpier. There are no major internationally competitive Thai firms either.
China is different here. Unlike Thailand, or Romania or Ukraine, China has dozens of these fairly nice cities. Even more than their relative size would indicate (see India for an example where there's really only a few). Pick a random tier 2 city and search for "4k walking/driving tour" on Youtube. These places are more developed than you may realize (nicer than Bangkok, at least). And unlike Poland, Ukraine, Romania, or Uzbekistan - these cities are the home to genuinely globally competitive advanced firms. Deepseek is catching up in AI, despite GPU export restrictions. DJI owns the consumer drone market. Tencent, Alibaba, Xiaomi, Huawei, and Baidu go toe-to-toe with FAANG in many areas. Bytedance (and now Xiaohongshu lol) are beating American firms in social media to the extent the Feds are freaking out and trying to ban them. CATL increasingly owns the battery industry. BYD is outselling Tesla. Chinese solar companies have lowered costs so much that we've slapped triple-digit percent tariffs on them (and on BYD too, btw).
To say this doesn't constitute an "advanced economy" because some parts are substantially less advanced is pure, unadulterated, COPIUM.
“Worth less” is more a function of their capital markets being sketchy, to say the least. Tencent even with that has a ~500B valuation.
If you really wanna argue that IBM and HP are really, truly, worth more than Alibaba and Baidu…I’m not sure what to say. IBM??
It’s not that China is richer than the US, or that they work fewer hours, or that they don’t have their own problems. But they’re catching up, *very* quickly, and the West has its head in the sand and is coping via narratives 20 years out of date.
Yes. The crime problem is definitely worse in NYC. Though I will say there are places in first tier Chinese cities where you can find hawkers and pickpockets and the like. They just operate more quietly. And there's plenty of dumps in parts of Chinese cities. Cramped tiny apartments and the like. Not a lot of crumbling infrastructure because it's mostly new but some bad construction that will degrade in coming years.
Also, on a per capita basis, most of those nations come out ahead. Having even a single city comparable with a single Chinese city puts them ahead per capita because China has so many capita and most nations have so few. China's size is a definite advantage. India, which is far behind economically, is not a fair comparison. But also, interestingly, India's investment model meant to stimulate high tech industries has also kind of worked and they produced tech firms and competitive engineers at a much higher rate than comparable (that is, low income) countries. Which would be Nigeria, not China.
China does invest a lot in infrastructure. But this is the classic move: you've now moved on from praising China to putting down other countries. And in this you're incorrect. There's plenty of advanced industry in those countries and internationally competitive firms. Even Ukraine has some things it can do that China's been struggling with for decades. China does lead them in electronics (which is where all of your examples come from). But that's a specific sector and, again, it's common for countries to have dominance in specific sectors. Thailand, for example, does a lot of pretty high end luxury manufacturing.
You're also exaggerating China's lead and achievements even in that space. So by subtly pushing China up and everyone else down you construct a world where China is further ahead than it is.
But you typed COPIUM in all caps after a string of adjectives. So I guess you're right.
I say copium because I think your view of China is accurate as of 2000, but not accurate in 2025.
And I think westerners in general are failing to update on the amount of growth they’ve had in the last 20 years.
(Also, the issue in NYC is not a “crime” problem per se. More a “the city is covered in trash and grime and has obviously deteriorating infrastructure” problem)
My view of China is based on up to date information, qualitative and quantitative, from both China and abroad. And what I've said doesn't resemble the China of 2000 at all when it had a GDP per capita of less than $1,000 and its electronics market share was about 4%.
Also western knowledge of China is stuck in pre-covid times since there was a great deal of hostility to foreigners at that time and most of them left and few returned. Which actually means they tend to assume China is growing faster than it is since GDP growth and technological breakthroughs were faster back then.
Interestingly, I have the same impression about how Poland is trotted out in this discussion. People carry around an image from 2000.
Go there. The nation is much more developed than 25 years ago. Warsaw is quickly becoming a skyscraper metropolis, and other cities are keeping pace.
The idea of "0th world" is silly. The impressive parts of Asian countries almost always amount to fancy shopping malls.
I implore you to go to Tokyo or Singapore and just walk around. No. It’s more than just fancy shopping malls.
I've been to Tokyo, and I'm curious what you're referring to. Certainly the trains are much better than anything in the US, but that's just path dependence + density. Other than that, I'm at a loss for what you could be thinking of.
An America-sized group of people at or above American standards PLUS a billion people at a lower but still not that destitute (Vietnam?) level would require a much higher PPP GDP for China relative to the US
Not necessarily true if you assume that living standards don’t perfectly correlate with GDP. Japan PPP Per capita GDP is much lower than the US but I’d argue living standards are comparable if not better in ways.
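The arithmetic behind the parent claim can be made explicit. A back-of-envelope sketch, where every dollar figure is an illustrative assumption (rough PPP-style magnitudes I've chosen for the example, not sourced data):

```python
# All figures below are illustrative assumptions, not sourced data.
US_POP = 330e6          # assumed US population
US_GDP_PC = 80_000      # assumed US PPP GDP per capita (USD)
VN_GDP_PC = 15_000      # assumed "Vietnam-level" PPP GDP per capita (USD)
CHINA_POP = 1.4e9       # assumed Chinese population

us_total = US_POP * US_GDP_PC

# Hypothetical China per the parent comment: an America-sized cohort at
# American standards, plus the remaining ~1.07 billion at Vietnam levels.
china_total = US_POP * US_GDP_PC + (CHINA_POP - US_POP) * VN_GDP_PC

print(f"Implied China/US PPP GDP ratio: {china_total / us_total:.2f}")
```

Under these assumptions the implied ratio comes out around 1.6, which is the "much higher PPP GDP" the parent comment is pointing at; the reply's counterpoint is that measured GDP and lived standards can diverge, so the inference isn't airtight either way.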
Trains are better - hell, Infrastructure is better in general. The average restaurant or store feels nicer, service is better. The streets are cleaner. The feeling of safety is much better. Few homeless or junkies - saw ZERO used needles on the ground. More convenient amenities everywhere.
This isn’t a density thing either - I’m comparing to NYC and Boston.
> 45: The Right Looks For Converts, The Left Looks For Traitors. There’s not much in this post beyond a natural expansion of the title, but it’s a snappy phrase, and matches my observation of the past ten years with friends and contacts on both sides. But I found myself thinking about it now because, for the first time in ten years, it no longer seems to be true - the Right has gotten much more into looking for traitors (I have yet to see leftists looking for converts, but anything can happen!), and I’m getting more harassment, illiberalism, and purity testing from the right part of the blogosphere than the left. I still basically believe the Barberpole Theory Of Fashion that cool people optimize their signals to separate themselves from the most obvious group of uncool annoying people in their vicinity; for a long time, that’s been SJWs and the Right has benefited, but I predict this has begun the very long process of changing (cf. Richard Hanania’s political course).
I think it's simpler than this. The US is a semi-dominant party system. The first American party system was the administration party (who supported Washington) and the opposition (who opposed him). Each party system has a dominant party and an opposition party. Since 1933 the Democrats have been the dominant party and Republicans have been the opposition. The dominant party is always marked more by internal fights since it can assume that it will mostly be in charge so what's important is what they do while they're in charge. The opposition party is more opportunistic because it can define itself as opposed to the dominant party. If you want to see this the other way the Republicans were the dominant party from 1861-1933 which is why the Democratic coalition included both socialists and the KKK.
We're probably entering a new party system but the Democrats are still acting like they're the dominant party and the Republicans like they're the opposition. Which is probably still the default unless the new party system makes the Republicans dominant.
I think it's too early to tell if there's a new party system. I recall agnostic of the Akinokure blog (now "Face to Face"?) claiming there was a cycle to transition to new party systems and that was why Biden couldn't beat Trump in 2020... He stopped letting me comment about how he was wrong, so I haven't kept up with him in a while.
I think it's widely agreed we're in a dealignment period. Basically a transition between party systems. In fact I can't think of anyone who disagrees with that.
I do not think anyone has a firm idea of what the transition is to or what the results will be. A system transition does not necessarily mean the previous dominant party loses power. But it does mean radically different coalitions and politics.
I can see possibilities and some possibilities I'm pretty sure won't happen. But that leaves far too much latitude for such an absolute "this will happen" claim.
How about this hypothesis: the change we're seeing is specific to Trump. There won't be some other Republican able to capture his "magic".
And then what happens? Both parties return to the 1990s positions and coalitions?
I think you mean this as a rhetorical question, in which case I agree with you.
Parties change, but they don't change *back*. Whatever the post-Trump system looks like, it won't be something we've seen before.
Yes. A less rhetorical way of putting it is: Even if the Democrats put humpty dumpty back together and remain as the dominant party it will be with different politics and a different coalition. So the system will shift.
Remains to be seen.
How does this theory comport with 1980 to 2008? Reagan won by a landslide in 1984 and Bill Clinton governed very far to the right of typical Democrats, and still lost Congress. After Clinton Bush 2 was even further to the right on religion and nationalism.
I could get behind a theory with much shorter timeframes (15-30 years), but the full run of 1933-2025 has counters to the theory.
It's usually put as something like 1933-1994, at which point we entered a period of de-alignment that's lasted from then to today. But the new system hasn't really solidified yet. Though there's a lot of disagreement around the edges.
I think that if your model for politics can't really account for the last thirty-five years then it's not a particularly useful model.
I don't think that "dominant party" vs "opposition party" is especially useful.
It's not that it can't account for it. It's that it sees it as a transition between systems. That these transitions take decades is normal but you usually include it as part of the previous system instead of the succeeding one.
45: If you think of Jonathan Haidt's Moral Foundations theory, conservatives tend to place a greater emphasis on group loyalty, so you would imagine they'd be the ones more apt to engage in heretic hunting. I can't help but point out that people on the right have been bashing one another as RINOs and "cuckservatives" for roughly as long as I've been alive.
On the other hand, if you're an out and out collectivist, you would need to police your ingroup more closely for free riders and defectors who could undermine your group's solidarity. And I bet Robin Hanson would say that whatever politics you subscribe to, attacking someone for insufficient fealty would be likely to raise your status within your group. Richard Hanania would probably point out that progressives just care more about politics in general and thus are likely to expend more energy trying to enforce norms on each other. So...maybe.
34: here's one possibility: a lot of people seem to think that being cold for a period of time makes you more likely to get sick. You might "catch a chill" as they say.
45 - Conservatives in the 1950s and again in the 1980s seemed much more interested in driving conformity than they are today. The 1950s broke out into the counterculture of the 60s when the left was more open to different viewpoints and gained support. You can also look at related ideas - the left was very much about free speech when they were the outgroup and the right controlled powerful institutions.
It seems correct to me that when a political group is weak they are open to lowering standards for alliances but when they get strong they start the purges of anyone not sufficiently supportive of what they consider core goals.
Woke was definitely "more accepting" of differences in the early years (and wanted free speech, etc.), and then switched to ideological purges and censorship when they felt like they were in charge.
Yes, good point about weak vs strong positions. I was kinda thinking in 'all else equal' terms, but whether your group happens to be the underdog or the uberhund is probably always the more important factor.
But you could also argue that heretic hunting is a sign of weak group loyalty, because you're attacking people within your own group. And then use that to explain heretic hunting on the left, which is also a big thing.
Maybe heretic hunting is how you strengthen group cohesion, though, or at least maintain a certain level of it. You trigger a shame reaction, which gets people to adopt the group's beliefs as their own in spite of whatever their own inclinations might be, or maybe you pick out a few loud dissenters and burn them at the stake or whatever, and everyone else is too intimidated to do anything but go along with the group.
That's like arguing that the body killing cancer cells is self-destructive. You need some method of ensuring that the collective will is maintained.
I guess it's a direct proxy of low asabiyyah? Low asabiyyah is usually correlated with success, so whenever someone gets a slight upper hand, witch hunting it is.
> How would one design a version of this system which avoided this failure mode?
Third-party neutral arbiters with payment held in escrow, but that's way more trouble than it's worth.
Aww, thank you! You have a beautiful family as well, I'm sorry my toddler tried to run over your kids in a wheeled cart.
28: This premise seems just obviously false to me. Perhaps adults are ON AVERAGE more skilled than teens in every way, but it is not the case that every adult is better in every way than every teen. Some teens have some advantages over some adults.
The naive economic model predicts that, anywhere a teen is employed, there was no adult that was BOTH a better potential employee AND willing to do the work (for the same wage). The fact that many teens successfully find jobs does not contradict this model in any way that I see. It just means that each individual local adult was either already employed, holding out for a better job, or defective in some way relative to the teen--which seems pretty plausible on my general world model.
I do think the claim "human labor will not be worthless" is still technically literally true--the naive economic model just predicts that human wages will fall until they are competitive with AI. As long as the AI costs more than "literally zero", then there is some other "also not literally zero" wage where paying a human to do the job costs the same amount per unit of work performed. (Though it makes no guarantee that this will be enough for the human to survive on; it could be "one cent per decade" or something.)
> The naive economic model predicts that, anywhere a teen is employed, there was no adult that was BOTH a better potential employee AND willing to do the work (for the same wage).
"For the same wage" is the important point. If teens are worse at everything, they still do whatever they have a comparative advantage in, and make less money.
The issue here is that with cheap, smart AI, the market price for a human will likely be below subsistence. Also, it's likely that there would be costs involved in hiring them that are more than they're worth. Like imagine the cost of installing ramps is twice that of hiring an able-bodied employee. You could hire someone in a wheelchair, but it's only worth it if they pay you.
"below subsistence"
One of the open questions mentioned is whether massive abundance enabled by machines doing most/all human labor will make it worthwhile to grab even a tiny slice of the very large pie.
(I.e., if I'm making $1T/day, inflation-adjusted, I can afford to pay workers $10,000/day and not notice the cost. I'm way better off, sure, but even compared to a worker making $400k/yr today, those workers would be nominally better off under this system.)
Would something like this actually happen, though? Not in a frictionless, spherical world where everyone is a perfectly 'rational' economic actor. But the world we currently live in has a major movement of people trying to 'buy local' to support businesses in their communities, despite the fact that supermarkets, Walmart, and Amazon long ago defeated that business model. Same with 'hand crafted' versus machined. People pay for imperfection sometimes. There's a perceived economic benefit to paying a nominal price to feel more ethical, and we've observed this in the marketplace today. So long as humans remain economic actors, we should expect this trend to continue.
If it 'costs a little more, but gives people much-needed employment', it's hard to imagine an expanded class of uber-wealthy people choosing NOT to spend $10,000/day hiring people to dig holes and fill them back up again, if nothing else.
Seems like you're mixing together "maybe the economy is so productive that humans could be paid 1000x less than robots and still make a living" with "maybe the rich will give charity to the poor". Those are both interesting ideas, but I think it would improve clarity to separate them out.
I'd tentatively be pretty happy with a system on the order of "tax automation to pay for UBI", but I'd still consider that pretty different from human labor actually having value.
To some extent, humans + machines (and technically other forms of capital/accumulated human discoveries) are already so productive that they can be paid 1000x more than humans alone in, say, 5000 BC.
To some extent, both ethical scruples and charitable giving have skewed markets away from some theoretical worst-case scenario.
Also to some extent, most advanced societies have some form of government-based wealth redistribution program, if not many.
I imagine all these scenarios expanding as AI becomes more capable.
I think there's a very big difference between "humans are paid 1000x less than humans+machines" and "humans are paid 1000x less than machines alone".
I'm not advocating the labor theory of value. I'm saying that in economic activity writ large I don't see humans completely excluded from the market just because productivity increases accelerate.
But you're still competing against robots. No matter how massive the abundance is, if it's cheaper to build robot workers than feed humans, it's not worth it to feed humans.
> There's a perceived economic benefit to paying a nominal price to feel more ethical, and we've observed this in the marketplace today.
Yes, we can all live happily as a result of charity. If we program robots to care about all humans, or care about specific humans that care about all humans, that works out well. But paper clippers would see no reason to keep us around.
There's also the possibility that AI that cares about ethics will conclude that it's better to kill one human and use the resources to support trillions of super happy AIs, but I'm with the AI on that one. I just want to maximize happiness, not make sure the people who are alive now are happy at any cost.
I think we're arguing from different scenarios. In your scenario, computers take over the world. In that case, sure, they'll probably get rid of humans once they have no more need of programmers and repairmen.
In a non-singularity environment, where economic activity is preserved but distorted by the availability of unlimited intelligence, I'm arguing that there's historical precedent to believe market forces will continue to value human effort - even in a world where AI can drive, paint, write, and do everything a normal human can and then some.
I'm NOT arguing that human labor will be dominant, or even competitive.
This is all predicated on the idea that AI workers will be significantly cheaper than human workers. Robots are technologically advanced and require significant amounts of advanced components, metal and other resources, electricity, and maintenance. There's going to be a pretty high base cost to general robot labor. Right now machines used to replace labor (automation at factories, for instance) gain benefit from highly specializing in repeated tasks and extremely high volumes.
A general robot babysitting kids, mowing lawns, or doing other manual tasks is going to have very little advantage (and after robot-specific problems may actually be far worse) over humans doing those tasks. LLMs and current ideas about AI labor don't involve general tasks, but specifically the kinds of tasks that humans do with minimal physical action. Writing an email, setting up a meeting, reading Wikipedia.
There will not be a time when humans are expensive and robots are cheap, unless there's a complete change in our understanding of physics. There will be downward pressure on human wages, and in many cases humans will no longer be able to do certain work economically. But that's very far from a general "humans can't do anything economically compared to robots." I can't rule out that humans end up doing most physical labor while robots do most or even all of the intellectual labor. That robots will do all of the physical labor just doesn't make sense.
> A general robot babysitting kids, mowing lawns, or doing other manual tasks is going to have very little advantage (and after robot-specific problems may actually be far worse) over humans doing those tasks.
Many parents would pay a big premium for a robot which could babysit their kids as well as a human. And robots already do an enormous amount of lawnmowing.
"As well as a human" is a bit of an interesting question here. What would it take for a robot to have the social and physical characteristics of a living, breathing human, such that a human child could bond with it and develop healthy social interactions generalizable to later human-to-human bonding?
There's a pretty high base cost for human labor, too. It takes well over a decade of continuous labor to produce a fully-functional human laborer. Once they're mature, there are still major ongoing costs for housing, food, and medical care.
This is disguised because typically you RENT a human laborer instead of buying them. But we can do that with robots, too, if it makes economic sense to do so.
It's true that cracking intelligence is not the same as cracking robotics, so there might be an exploitable gap there. On the other hand, we already use robots for lots of labor, AGI is likely to greatly accelerate R&D, and robotics will only get better, not worse. And just as humans provide an existence proof for what's possible for brains, they also provide an existence proof for what's possible for bodies.
>AGI is likely to greatly accelerate R&D, and robotics will only get better, not worse. And just as humans provide an existence proof for what's possible for brains, they also provide an existence proof for what's possible for bodies.
Agreed. And specialized machinery to _build_ robots by the millions will reduce per-robot costs (with tradeoffs on how much one pushes optimizations improving the design of robot bodies vs how much one optimizes specialized machinery to manufacture _fixed_ designs for robot bodies).
Yes, but we value humans qua humans. We would want humans to exist even if they were neutral in terms of productivity, and in a society with predictable excess we value humans even if they are net losses in productivity.
So there's a built in buffer for how productive a human needs to be per cost. That is, we'll take a loss in some areas to support a human that we would never accept or even consider for a robot.
It's the same reason a business owner might hire his less-than-competent nephew even at the same or higher wage than an outsider who already possesses the desired skillset. We pay extra to gain in ways outside of direct productivity.
Seems like you have switched from an argument (in your previous comment) that humans will be economically competitive for some tasks, to a new argument that humans will survive because of charity?
Not charity, a lower bar.
Historical humans had to be, on average, at least as productive as their costs. If you didn't bring in enough food to feed yourself, then you died. Excess production improved your living conditions - better house, larger family, better fed.
But tools and machines freed us from a lot of the natural constraints. We grew wealthy enough that large numbers of humans could live while a few did the basics. "Civilization" arose from this process, but it continued in big leaps. We went from a large majority of the population working to get food to less than 5%. We don't think of, say, librarians, as living off of charity, even though they do not produce any of the basic necessities of life.
The way that we value librarians is one sense of what I mean if/when robots are producing our basic necessities. We would continue to pay for librarians for as long as they provide some sense of value, even if that value is detached from survival and necessity. We cannot say that a librarian is or is not economically productive, because the position exists for other reasons. To extend this to a post-robot world, we may value childcare/eldercare from actual humans in the same way. Something that has an intrinsic value that can't be compared to GDP but we want to continue anyway.
The second sense I mean is that a robot would need to produce in excess of its costs, or we would not produce the robot. Even if a company could exactly balance the cost of automating a function with the benefits, it would not choose to do it. We would, though, be okay balancing the cost of a human life with the benefits, even while being economically conscious. We value human life separately from whether such a life has a positive ROI (though we wouldn't want a negative ROI), just as with the non-farming jobs we still valued when farming employed 90%+ of the population and food was often scarce.
Speaking of Musk, there are lots of funny pairs of photos online of mild-mannered 30-something Jeff Bezos and bad-ass 50-something Jeff Bezos.
Would be nice if you could add a link to #33. https://chrislakin.com/bounty or https://x.com/ChrisChipMonk/status/1873101046199038301 works
Done.
The answer to #29 on reduced fertility is that you need to split the drop in fertility into two parts.
The first part ended around 1960 or so in the West. This was largely about fewer children per couple (huge drop from around 8 to 3, or the "2.5" used for comedic effect when I was a kid).
The second part from 1960 to today is more complicated. There seems to be a small additional drop in children per couple that largely is due to delay in the first pregnancy. Some of it is also from a big reduction in teenage mothers (this is the group for whom the fertility rate has dropped the most). I suspect confounding between this and reduction in marriage, especially in the black population. Black women have a much lower rate of marriage today than in 1960, but they also have a much lower rate of teen pregnancy. The former did not cause the latter.
I believe in the U.S. that the black total fertility rate recently dipped below the non-Hispanic white TFR.
My impression is a lot of the decline in fertility since the Housing Bubble has been among the lower half of society. They've become more careful the way upper half did long before them.
Yes, it took longer for young motherhood to stop being a source of status for low-income people.
It was never a source of status to be a young *unwed* mother for the upper 4/5ths socioeconomically, but for a while (I remember studies of "welfare moms" from the 70s-80s) for the lowest 1/5th it was a source of status and sense of accomplishment, and adequate money without work requirements.
The biggest TFR drop for blacks occurred in the 1990s (Clinton era welfare reforms and social shaming for teen moms probably both played a role).
https://www.cdc.gov/nchs/data/hus/2020-2021/brth.pdf
https://www.cdc.gov/nchs/data/vsrr/vsrr035.pdf
Most of the change in attitudes/norms had already happened before the Housing Bubble, but that and the Great Recession that followed probably cemented the changes. In any case, now it appears self-sustaining. There is strong social reinforcement of birth control and abortions for young women of all incomes and ethnicities in order to live a more unfettered, individualistic lifestyle, and pursue education. Hispanics are a partial exception.
On 18: I’ve only ever worked 100hr weeks for a few months at a time, but Musk always read to me as work drunk. Just trying to keep things moving, doesn’t have the stabilizing personal life, and just on task all the time.
Then he had the personal issues with his son “becoming” trans and I think his appetite for subtlety went away.
I also see the Sam bet through Elon's eyes, or what I imagine are his eyes. Here's a guy he's friendly with, reminding him he needs to fork over a million in cash, while he's fighting to keep his companies going.
In short: he’s work drunk, over-optimized to keep his business going and all the stuff in the periphery is just sort of falling off. My heart breaks for him a little bit but I don’t think he wants to slow down.
>I also see the Sam bet through Elon's eyes, or what I imagine are his eyes. Here's a guy he's friendly with, reminding him he needs to fork over a million in cash, while he's fighting to keep his companies going.
Elon insisted on the bet even though Sam thought it was ridiculous (quoted from Sam's post below):
Elon’s response was, I believe, the first discordant note ever struck in our friendship:
> Elon: Sam, you of all people should not be concerned about this.
He included a link to a page on the CDC website, indicating that Covid was not even among the top 100 causes of death in the United States. This was a patently silly point to make in the first days of a pandemic.
We continued exchanging texts for at least two hours. If I hadn’t known that I was communicating with Elon Musk, I would have thought I was debating someone who lacked any understanding of basic scientific and mathematical concepts, like exponential curves.
Elon and I didn’t converge on a common view of epidemiology over the course of those two hours, but we hit upon a fun compromise: A wager. Elon bet me $1 million dollars (to be given to charity) against a bottle of fancy tequila ($1000) that we wouldn’t see as many as 35,000 cases of Covid in the United States (cases, not deaths). The terms of the bet reflected what was, in his estimation, the near certainty (1000 to 1) that he was right. Having already heard credible estimates that there could be 1 million deaths from Covid in the U.S. over the next 12-18 months (these estimates proved fairly accurate), I thought the terms of the bet ridiculous—and quite unfair to Elon. I offered to spot him two orders of magnitude: I was confident that we’d soon have 3.5 million cases of Covid in the U.S. Elon accused me of having lost my mind and insisted that we stick with a ceiling of 35,000.
Not saying he behaved well. Just saying I can wrap my head around it. I think he’s about 15% nuts now.
I feel like he's always been this 'nuts'. He seems to lean into situations where he disagrees with popular opinion when he can justify his perspective from first principles.
That's how he arrived at electric cars, tunnels, rockets, and most especially colonizing Mars - his highest priority. I think sometimes this makes him come off as weird, especially when what he regards as a 'first principle' is less directly tied to laws of physics.
He also reacts quite strongly when he encounters something that throws him off his stride. That didn't matter much when he had a smaller following and he was talking about getting rid of side view mirrors on cars, building cars from one huge casting, making a fully automated car factory with no humans, or removing buttons and knobs from the user interface. It's more pronounced now because his audience is bigger and the topics more far reaching.
Like you mentioned, he disagreed with his kid about the trans issue and suddenly started talking about the "woke mind virus". He got frustrated with censorship on his favorite app, so he started talking about free speech and buying Twitter.
I believe what activated him politically during the Trump campaign was delays in rocket launch approvals for Starship. This was more than just an annoyance; it was a huge problem for his core mission. He perceived that the Biden administration was trying to punish him for weird tweets by holding up launches. But with a tight launch window for getting to Mars every 2 years, Musk doesn't have time to waste.
On a tight schedule, Musk decided to go all in on the opposition. This is why he kept saying the election was an existential decision point for humanity. In Musk's mind, if he doesn't get us to Mars, we'll never go, and if the government makes it impossible for him to get us there, we're a one-planet species and that means we're doomed.
Biden freezing Tesla out of showcases of American EV companies seems to have done more to radicalize Musk than anything to do with SpaceX.
https://www.cnbc.com/2021/08/05/buttigieg-not-sure-why-musks-tesla-snubbed-from-biden-clean-car-event.html
He did that years ago, but the only concrete consequence I'm aware of from that brouhaha was a bunch of mean tweets by Musk protesting the injustice of not getting invited to a pageant that's meaningless if Tesla's not there.
I'm not claiming the decision to delay rocket launches changed Musk's political orientation. I'm saying it was the threshold event that moved him to spend millions of dollars in an effort to change things. Musk is pretty open with what's on his mind. If you listen to his speeches during the campaign, he regularly talked about being at a tipping point for humanity, and about how bureaucratic procedures would soon make it impossible to get anything done. As an example, he would talk about having to do sonic boom tests on seals, and how hard it is to launch rockets.
You're right, though, that he's also dealing with government regulators in his other companies. So I'm sure there's some multi-causal element at play.
This is Sam Harris' account of what happened, and since I don't like Sam Harris, I give less weight to this being the pure unalloyed truth. Not that I'm saying he's lying, but everyone gives their version of what they think happened, which may not be the same as what really happened.
X: I gently reminded Y of the promise he made but Y got unreasonably angry and refused, what a miser!
Y: X was HOUNDING me about a dumb silly joke at a time when I was MASSIVELY stressed and I lost my temper, wouldn't you?
Isn't Harris the one who said no sin is too great to prevent the election of a candidate he doesn't like?
The candidate I don't like gets elected EVERY time, but I don't use that to justify tossing my whole ethical framework. Maybe Harris claims this is/was a one-time thing, but once you cross that Rubicon, it's tough to convince me that he's back to normal thinking again. (Or maybe he was always terrible.)
Anyone who justifies every lie, privation, and conspiracy to undermine democracy for a narrow political objective gets a permanent 100% discount on all future statements.
Revised Monty Hall, Secret Billionaire Goat, solved by exhausting all possible universes:

1/3 odds: You picked the car.
- 50% odds: Monty reveals the bad goat -> switch is "correct"
- 50% odds: Monty reveals the good goat -> don't switch is "correct"

1/3 odds: You picked the good goat. Monty ALWAYS reveals the bad goat -> don't switch is correct.

1/3 odds: You picked the bad goat. Monty ALWAYS reveals the good goat -> switch is correct (might as well get the car).

In total:
- You had a 3/6 chance of being in a universe where the good goat is revealed. Not sure what you do here; probably just offer to take the good goat. If the host doesn't allow this, regular Monty Hall rules apply: switch for better odds at the car.
- You had a 2/6 chance of being in the "picked good goat" universe, where don't switch is correct.
- You had a 1/6 chance of being in the "picked car + bad goat reveal" universe, where switch is correct.

So, given a bad goat reveal: you don't switch!
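That enumeration checks out under a quick Monte Carlo sketch (assuming, as above, that Monty never opens your door and never reveals the car, choosing at random when both unpicked doors hide goats; the door setup and labels here are mine):

```python
import random

# One round of the Secret Billionaire Goat game.
def trial(rng):
    doors = ["car", "good goat", "bad goat"]
    rng.shuffle(doors)
    pick = 0  # by symmetry, always pick door 0
    # Monty opens an unpicked door that isn't the car; random if two goats.
    goat_doors = [i for i in (1, 2) if doors[i] != "car"]
    opened = rng.choice(goat_doors)
    return doors[pick], doors[opened]

rng = random.Random(42)
N = 200_000
bad_reveals = 0   # universes where the bad goat is shown
picked_good = 0   # ...and your own door hides the good goat
for _ in range(N):
    pick, revealed = trial(rng)
    if revealed == "bad goat":
        bad_reveals += 1
        if pick == "good goat":
            picked_good += 1

print(bad_reveals / N)             # ~0.5, the complement of the 3/6 case
print(picked_good / bad_reveals)   # ~2/3: given a bad goat reveal, don't switch
```

Conditional on seeing the bad goat, you're holding the good goat about twice as often as the car, matching the 2/6 vs 1/6 split above.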
On 17 - if you’ve played a fair bit of PoE2 and then watch Musk streaming the game, it becomes clear very quickly that he is not playing the game at a high skill level (neither knowledge nor execution). This makes the situation more interesting in my opinion, because he should have known how obvious this would be, yet he did it anyways.
Making blatantly false statements knowing full well that no one has standing to correct you is what is known in the business as a "flex".
I don’t find that persuasive. Aside from the fact that I think he cares deeply what others think about him (though he denies it), why wouldn’t he just tweet that the sky is purple or something? This seems like a very specific and strange way to go about flexing.
...Because thinking the sky is purple doesn't make you a superior human being.
Yeah, that's true, but I do find it funny. A lot of people are losing their minds over this, but I think "Elon Musk is bad at this game, git gud noob" is very relatable for those of us struggling with "I don't know what your problem is, I just breezed through Act 3 on Cruel setting and I never had any problems, you're just bad at this game" smuggery from others on the discussion forums.
https://steamcommunity.com/app/2694490/discussions/0/599641889544684086/
(I do see a lot of "he BOUGHT all his gear, he paid someone to farm it for him!" but in my own opinion, that's what the trading mechanism in PoE is anyway, so it's a difference in degree, not in kind. Paying someone to play it for you is, of course, different.)
RMT/account sharing/boosting != in-game trading. One is a feature, the other is a bannable offense.
People who think the game is too hard have nothing in common with someone lying about being the 11th highest ranked player while someone else does the hard work of actually playing the game for them.
I totally agree that "git gud" smuggery is annoying and toxic, but it's completely unrelated to this.
I wonder, is he "doing a little trolling"?
If he's actually cracked at Diablo, I could see that. Though it's still an odd thing to troll about, especially now
Musk is one of the most prolific and flashy Trolls to ever Troll. He is also probably the absolute king of shit-posting. In many ways, he is 4Chan personified (or is that de-anonymized?) a smart person who blows off steam by acting like a cretinous idiot. When you look at him in this frame, pretty much all of his actions make sense.
Up until very recently, most people who acted this way in public would very quickly become personae non gratae in any polite company, but now it just so happens Musk is the richest person on earth and happens to control several of the most important and powerful corporate entities on earth, so he can't just be dumpstered. The whole "nazi salute" mess is the prime example of this new dynamic - the legacy media is doing its absolute best to sink him with the claims, which despite their dubious nature would previously have sunk anyone not actually royalty, only to watch the ADL and the prime minister of Israel defend him, and for twatter users to generate 100x the engagement numbers of their original hit pieces with memes pointing out their hypocrisy by showing any number of videos of Dem politicos doing the same gesture.
The future belongs to the shitposters, and I for one am here for it.
I'm surprised at the fact that despite knowing how counterintuitive the original Monty Hall problem is, people are still approaching this new version without Bayes' Theorem. I mapped it out on paper, hopefully the symbols are self-explanatory and my handwriting is legible: https://imgur.com/a/1BeuxNe
I am so confused whenever someone says this. I first learned of this problem when I was a teen, and found it a clever little bit: an interesting and very intuitive twist on probabilities. This is not a brag, I just genuinely don't understand what's so unintuitive about it. I wonder if people are misunderstanding the conditions (i.e., not realizing the host will never open the door with the CSR behind it).
The common intuition goes:
A) initial state: I don't know where the car is.
B) opening the door adds no information (wrong), since I *still* don't know where the car is.
C) therefore, switching is equiprobable to not-switching.
The average joe isn't mapping out the tree of all possible outcomes, he's trying to track the physical location of the car by mutating a single world-state.
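For anyone who does want to map out the tree: a minimal sketch (plain Python, exhaustively enumerating the nine equally likely car-location/first-pick combinations, under the standard rule that the host always opens a goat door that isn't your pick):

```python
from itertools import product

doors = [0, 1, 2]
stay_wins = switch_wins = 0
for car, pick in product(doors, doors):
    # Host opens some door that hides a goat and wasn't picked.
    opened = next(d for d in doors if d != pick and d != car)
    # Switching means taking the one remaining unopened door.
    switched = next(d for d in doors if d != pick and d != opened)
    stay_wins += (pick == car)
    switch_wins += (switched == car)

print(stay_wins, switch_wins)  # 3 of 9 vs 6 of 9, i.e. 1/3 vs 2/3
```

(When your pick is the car, the host has two goat doors to choose from; which one he opens doesn't change the tally, since switching loses either way in that branch.)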
The A16Z comparison with the S&P 500 is very interesting! Though kind of funny to post it now through 2018 instead of 2024, I'd love the fuller figures through today and see if this stands up (though the S&P has continued to do so well, it wouldn't be that surprising).
I think this partially has to do with how strongly the S&P 500 performed over that period -- mid-teens returns are something VC is probably pretty pleased with, but the S&P 500 is just way above most expectations for those time periods, even probably ca. 2009.
And the 2016 vintage fund won't have had time to have enough investments grow by 2018 to have a positive return, this is the classic so-called "J-curve." Nothing to see there.
However, I think it does get to the fact that a bunch of VCs are just totally mathematically illiterate (or claim to be) and say that the valuations they invest at don't matter, which they definitely do (I actually made an explainer about this a couple of years ago, alas): https://www.youtube.com/watch?v=B5V1Z5VwaVI
I don't think Marc falls into that category, but a lot of VCs do.
#6. It's been a long time since I've laughed so hard. As a thank you, here is a godawful addition to the list:
"She skimmed through the commentary, scrabbling greedily for actual first lines like a rabid raccoon not yet too far gone to feel hunger, screaming with laughter like an unfortunately ratchet soprano."
I was in the local equivalent of the DMV reading these and was literally crying laughing. So funny.
"“Waaaah” cried the infant as the very bad murderer murdered the ever loving shit out of its mother."
Well worth a binge through the entire archive, there is some gold in there.
John, surfing, said to his mother, surfing beside him, "how do you like surfing?"
"Ooh la la!" whispered Larry in French.
“Crime,” declared the police captain, “is everywhere, crime, crime!”
"I hope you like this book because I wrote the whole thing completely naked!" is my favourite from way back when.
Boo. :-)
OK, I couldn’t keep from
making some up:
Gondola grieved for each shrimp as he ate it and the sound of his own chewing seemed full of pathetic cries, but also some squeaky noises because chewing shrimp sounds that way sometimes, you know?
"I killed her in the prequel, I will kill her in the prequel, I may have just killed her" he murmured, backing away from her live body and stumbling over tenses, her dead body, and also a bunch of luggage and shit.
When she heard Rod’s car in the garage Mona looked for a quick place to stash the vibrator, regretting that it was time to stop cumming and cumming and cumming and cumming and cumming and cumming and cumming.
Re: Turkey's fertility rate. The demographic transition is definitely not only associated with Europe.
Iran had one of the fastest demographic transitions of any country. In 1979, the year of the Islamic Revolution, Iran's fertility rate was 6.5. It dropped below replacement in 2000. This is about as dramatic as China's one child policy.
https://ourworldindata.org/grapher/children-per-woman-un?tab=line&time=1950..latest&country=OWID_WRL~IRN~CHN~IND~USA~EGY~TUR
On Musk, Bezos, Zuck, and testosterone:
There was actually just a fun thread on the subreddit about these three: https://www.reddit.com/r/slatestarcodex/comments/1hz7qsi/whats_going_on_with_all_these_ceos_who/
Speaking as somebody who was a regionally competitive strength athlete and has spent more time in gyms than most nerds have spent reading (and thus has seen a LOT of people on testosterone / gear), I'll chime in.
Zuck: no testosterone, just good diet and exercise.
Reasoning: no mass, no popped traps, average "fit" BJJ body totally attainable with a reasonable amount of training and a good diet
Musk: he's very likely on TRT, and I wouldn't be surprised if he's blasted and cruised a few times (ie taken higher-than-TRT doses a couple of times), but I doubt he's routinely on supraphysiological doses.
Reasoning: he's had some jaw changes when older vs younger, and this can be a characteristic change driven by supraphysiological levels of testosterone. And we know he's probably on TRT because of his tweets. But in terms of the reductions in body fat and increase in fat-free-mass you would see with routine blasting, it's totally absent. He has zero muscle tone in his arms and torso, and he would have more if he was blasting, even if he wasn't training.
Bezos: very much on higher-than-TRT doses. He's got the characteristic popped traps, and pretty significant arm development / size accompanied by the barrel torso you see in guys on supraphysiological levels of gear who are bulking or who have shitty diets. Especially for his age, those are pretty strong tells.
Observe Bezos: https://imgur.com/PfY5Tuj
Also, for anyone who's a man over 40 here (which has got to be at least 2/3 of the audience), TRT is an absolutely magic life changer that makes you feel ~10 years younger. It is THE strongest "quality of life" intervention available to dudes 40+.
I have a substack post going over the benefits, risks, and possible side effects with a decent amount of rigor, here's a link:
https://performativebafflement.substack.com/p/the-case-for-testosterone
No way 2/3 of the audience is over 40!?
We'll know from the latest survey!
No, you're right, it's not even a majority, surprisingly:
https://imgur.com/a/V6pbPlf
Which I suppose is MY turn to be surprised, because it sure doesn't *feel* like a bunch of young blades in the commentariat. Of course, the commentariat may skew older than the survey answer-ers.
My intuition was that most of the readers would be between 30 and 40 based on me being early 30s and having found the blog towards the tail end of the SSC era. I'm likewise surprised at how many 20 year-olds there are according to that graph.
I think TRT would be a good topic for a signature Scott deep-dive
Your list of possible down sides is missing prostate cancer.
> Your list of possible down sides is missing prostate cancer.
When I looked into it, pretty much all of the known events were individual case studies, and it more or less fit with the other "pretty rare side effects likely down to biological variation."
This metanalysis essentially points to the same scarcity and lack of data:
Lenfant et al. *Testosterone replacement therapy (TRT) and prostate cancer: An updated systematic review with a focus on previous or active localized prostate cancer* (2020)
"Until more definitive data becomes available, clinicians wishing to treat their hypogonadal patients with localized CaP with TRT should inform them of the lack of evidence regarding the safety of long-term treatment for the risk of CaP progression. However, in patients without known CaP, the evidence seems sufficient to think that androgen therapy does not increase the risk of subsequent discovery of CaP."
https://hal.science/hal-03013944
My overall conclusion was that it's likely not a significant threat for most people considering TRT.
https://www.bulwer-lytton.com/
> 35: The NASDAQ has almost doubled in the past two years, so how come it doesn’t feel like we’re in an amazing tech boom?
doesn't it? I kinda feel like I'm living in an amazing tech boom.
We're certainly in an information tech boom, but outside of social media and search, have there been a lot of particularly notable gains? Not just regular gains, but denoting a "boom" and significantly different than previous eras?
You can say AI, but the practical applications haven't come in. AI is still very much a money-losing operation and will likely not pay off for several years, maybe longer.
There are B2B AI vendors who are printing money right now (Databricks, Palantir, etc.) because enterprise AI is valuable. It's the consumer-facing AI applications that haven't materialized.
I can say AI! I think it's amazing and at least tech-boom-adjacent that I can now pay a pittance and have access to extraordinary technology that didn't even exist five years ago.
I don't really want to quibble on how exactly to define "boom." My feeling is that there's both genuinely new cool stuff (largely around AI) and also I have the general feeling that I have easy access to insanely high quality, insanely cheap consumer tech stuff everywhere I look. It's cool! We don't have to call it a boom.
I feel like I'm living in a tech boom on the scale of a generation, but not on the scale of the past two years (ie if you compare to the dot com bubble of 2000, or the social media boom of ~2010)
Any country where the cost of seeking public office is multiples of the average annual national income should not in any sense be thought of as a democracy; at best, it's an oligarchy where the opinion of the demos occasionally gets to act as a tiebreaker
something something it's a republic not a democracy
It doesn’t sound so clear to me? Getting yourself heard, your message out, your campaign publicized is never going to be cheap!
In my country, campaign expenses are capped and scrutinized, and if all’s in the clear and you didn’t do too badly, they are also reimbursed.
The US of course has rules about campaign finance, but I doubt they believe losers should be reimbursed at the taxpayer’s expense.
The cost of seeking local office isn't anywhere near that high. My lower middle class uncle was the (elected) head of public works for the rural town he lived in. My upper-middle class coworker used to be on the city commission for the suburban city he lived in. He had to switch to part time at my company in order to do both. But I am not sure if he actually had any significant pay cut, other than having to attend public meetings on some evenings.
Before the recent semi-implosion of political parties in the US, a major role of parties was to find successful local politicians who have promise and help them with the elections for the higher offices. That doesn't seem particularly awful to me.
34: "is there any health belief that foreign countries make fun of Americans for?"
Chiropractic? Wikipedia suggests it's primarily US-ian, though I don't know how aware foreigners are of it (or aware of how stupid it actually is).
Reminds me of a comment that a Chinese doctor made at a workshop on reproducibility: the concept that you should standardize medicine and as a doctor you should treat two different patients in the same way and expect comparable outcomes, that struck her as weird and a very Western concept. In Traditional Chinese Medicine, apparently you try to come up with an individual treatment for each patient.
I don't agree with her and doubt that TCM gives better results overall, but it did make me think.
It sounds like a good thing to say in order to sound clever, but is it actionable?
Well, yes it is.
She described it the following way, or that is how I understood it. Think of herbs, and how traditional Chinese doctors would use them. They have a large repertoire of herbs, and for all of them, they have some idea of what they do. Like some work against fever, some make you sleepy, and so on. When they see a patient, they would look at what symptoms that patient has, and brew a tea from some combination of some herbs. Even if you have only 20-30 herbs, you will probably never end up mixing the same tea twice. It's probably even more complex than that because you can use different dosages, and of course you have also other treatments than herbs.
She found it very alien to give the exact same treatment to several patients and see how well it works on average. Which is our Western evidence-based approach. Because to her, that treatment would not be the mixture that a Traditional Chinese doctor would have recommended for any of the patients (or at most for one, but then all others would get different mixtures), so it seemed strange to her to expect this to work at all.
Some branches of Western medicine are trending that way, for example, biological cancer treatment. That is often tailored to a single patient. It is costly, though.
It's a thing throughout the Anglosphere. My cousin used to run a clinic in Scotland.
I guess the cottage industry of woke-adjacent health fads? For example, prominent US health authorities in the Summer of 2020 saying in unison that BLM protests are Ok cause "Fighting Racism is more important than COVID" while also screaming at much smaller conservative protests a month earlier. Or when they tried to organize Vaccine distribution based on "diversity" instead of vulnerability?
I will take my Eastern European grandma telling me to wear a warm sweater so as not to get "cold" over the entire medical establishment of my country going insane with some new Tumblr religion on a regular basis.
Not quite something people make fun of Americans for, but things that they believe are necessary more than other countries, and are of questionable benefit:
- Yearly general health checkups for healthy people.
- Flossing teeth (as opposed to just brushing).
The Finnish dentists I've visited tend to recommend against flossing... because they think that plastic toothpicks should be used instead.
Russian healthcare system promotes yearly general health checkups for healthy people (диспансеризация) as the best thing you can do for your health, since it lets you detect and treat any illnesses at an early stage.
> Max Tabarrok: AGI Will Not Make Labor Worthless. Teenagers’ labor isn’t worthless, even though adults are more skilled in every way and there are ~ten times more adults than teens.
His post doesn't mention teens at all, but instead "population growth, urbanization, transportation increases, and global communications technology".
17 reminded me of your On Priesthood article. Are gamers also a priesthood? You can be the richest man in the world, and spend a bunch of money to grind a HC character on your account to be top-15, but in the end, you have to actually play the game and if you don't play the game none of that matters and in fact actually makes you look worse than somebody who doesn't game at all. Speedrunning communities spend a lot of effort to track down the liars in their midst for ~0 benefit.
Is it just that the scientific/journalistic/etc priesthoods actually have cachet, while the Gamer priesthood has similar norms but not enough weight to throw around? I haven't watched Musk's PoE2 gameplay nor do I play PoE2 nor have I watched the videos about his account, but I Trust The Experts that he is comically bad at PoE2.
I dunno, maybe I just didn't understand On Priesthood but the similarity feels striking.
...I don't even know what you're trying to say here? Yes, I'm sure everyone agrees that Musk is a terrible person, but why would he care about that? He already has all the power he needs.
He's specifically violating the norms of a community (Gamers) in an attempt to ingratiate himself to the community, but it doesn't work because they defend those norms very highly. It's like somebody coming into a scientific circle and committing academic fraud, no amount of money fixes that.
Yes, yes, he's very rich, but the very fact that he has lashed out at a man best known for using a dead rat as an alarm clock for criticizing him shows he does, in fact, care.
> He's specifically violating the norms of a community (Gamers) in an attempt to ingratiate himself to the community
But I don't think he's even trying to do that. He's just trying to brag that he's such a Gigachad that he can be the richest man in the world, take over a country, and still have time to effortlessly get the world record of his favorite game.
I mean, that's one reading. I think it's more likely that Elon Musk is extremely insecure based on all of his behavior and that he is not, in fact, some kind of uncaring super-robot because money makes you immune to having feelings.
Additionally, it is hard to imagine anybody who is not a gamer caring about your rank in PoE2. Maybe your wife would say "cool" because she's happy you succeeded at that thing you were working on.
No, I'm saying that he does care. It's just that he's also unfathomably narcissistic. This comment from the reddit thread put it best:
> He doesn't want to be one of us, since that reflects poorly on his ego. He has to be superior. His narcissism covers the rest of the gaps that highlight how totally obvious that not only is he not superior to the average person playing the game, but that he's significantly worse.
This was your initial post:
> Yes, I'm sure everyone agrees that Musk is a terrible person, but why would he care about that?
Narcissists are insecure. They need constant validation. The fact that he deluded himself enough to stream playing PoE2 is maladaptive behavior that comes from the narcissism. What he wanted was to show off his cool PoE2 character and have everybody oooh and aaah at how amazing he is. He is now mad that it is not happening and that people are instead saying he's comically bad at PoE2.
If I had done something this embarrassing I would probably just turn off notifications on the post, and try to focus on other things. Instead, Musk has decided to pick a fight with Asmongold, and lose, because he cares very deeply.
None of this really has anything to do with gamers-as-priesthood, though.
As someone with a few speedrun world records (Diablo II, not PoE), I can confirm the best you can hope for is an "oh that's nice, dear" from your wife. Or maybe that time my parents watched a recording of a charity event and said "wow, you really looked like you were having fun".
Pouring money into a really good indie studio for a game cycle or two would instantly forgive him in my mind
I think by priesthood I mean something separate from just skill, more like skill + induction into a prestigious inward-facing community with strong reputational effects. I don't think gamers have strong reputational effects - if you're not Elon Musk and therefore already famous, other gamers don't know what you're doing. I think Elon Musk's fame is a bigger part of this than gamers being a priesthood.
I mean, Karl Jobst runs a YouTube channel with over a million subscribers, and half the content (and the most popular content) is about catching people faking speedrunning, even in obscure games, and he is currently involved in a lawsuit with somebody who lied about his world records (Billy Mitchell) and sued him (and others) for pointing it out. There's a whole stereotype about calling people "fake gamers" for a reason.
Obviously, the only reason anyone outside the community cares is that Elon Musk is famous, but internally the reason they care is that being a fake gamer is bad. If Elon Musk was a mid-level streamer who tried to pass off an account he paid somebody to grind on as his own, it would also sink his rep within the gaming community if it came to light. There was a huge internal drama about Dream manipulating RNG in a Minecraft speedrun a while back, too.
I'd say nah. Because priests are high-status, even outside the priesthood. Commoners pay deference to priests, and look to them for expertise.
Gamers are certainly a "community", in the sense that they have their own social-norms and internal hierarchy. But the average joe doesn't treat their friendly neighborhood gamer with deference or seek their expertise, as they might for a doctor/professor/lawyer/etc. In contrast, when was the last time you consulted your local gosu about build-orders?
(Eh... ... ... maybe South Korea's an exception.)
The Karl Jobst thing, I think can be explained away with generic "schadenfreude, and contempt for dishonesty".
Re:6
The winner of the ChatGPT submissions was brutal:
"Nestled between a whimsical forest and a babbling brook, the hospice center stood like a quirky haven for souls on the cusp of their next adventure."
33: How would one design a version of this system which avoided this failure mode?
Just work with people whose problem is anger management, negativity towards other people or some such. Then if they agree to pay you're *sure* it worked.
34: Is there any health belief that foreign countries make fun of Americans for?
Yeah, the belief that burger and fries is a normal meal.
I've been to many countries and you can order a burger and fries in most of them.
Yes, but do they serve them in school or workplace cafeterias as a main or only option? Sure, people eat burgers everywhere, but in most places I've been (Europe mostly) they are thought of as a guilty pleasure or convenient quick snack, not as a staple part of a diet that you can serve to people on the regular and say you've provided them with meals.
Similarly, in the US whenever there is only one option of food available - museum restaurant, the only roadside cafe for miles, only cafe in a small town etc - it's almost always some kind of burgers or sandwiches, in other places it would be an exception, in my experience.
Not an American, but a burger seems like a perfectly reasonable meal to me (fries are significantly less healthy, though).
There are places (e.g. middle east and Balkans, based on what I know), where "ground meat with spices" is very common. Maybe called kofta, kebapche, gyros, losh kebab, doner kebab, chevapi etc. Tastier than hamburgers.
I never understood the burger and fries hate. It's a cheese and grilled ground beef sandwich, with cheese and other veggie toppings and sauces, with a side of fried potatoes. I think people just have associations with "fast food" and people really love being snobby about fast food.
Playing devil's advocate here, but burgers generally have a *lot* more meat in them relative to other components (especially the bread) compared to most other e.g. deli cut sandwiches.
My understanding is that Americans are, in general, fairly unusual in the quantity and frequency with which we consume meat, in no small part due to our heavily subsidized beef and poultry industries.
Well, you can see Togo's menu here: https://togos.com/menu/
Every sandwich is listed with 1/4 lb. of meat except for the Triple Dip (1/2 lb.) and the sandwiches where the meat is bacon or tuna (no weight listed).
Subway, the international chain, doesn't advertise meat amounts on their menu. But... https://www.myheartliveshere.com/how-much-meat-goes-on-a-subway-sandwich/ says a Subway sandwich contains four ounces of meat.
By comparison, a McDonald's Quarter Pounder with Cheese is... you know what, I'm not even going to link it.
These are all exactly the same thing. Actually, the Quarter Pounder with Cheese has more calories than the deli-cut sandwiches do, so if anything there's _less_ meat relative to everything else.
Meat is good for you. You’re playing the angel’s advocate sir.
If you think burgers have a large proportion of meat relative to deli cut sandwiches, you ought to try a real corned beef sandwich from a real deli. Even if you have Swiss cheese and coleslaw on it, the proportion of meat is much larger than a burger.
Other deli sandwiches are similar, but with corned beef it seems more pronounced.
Interestingly, corned beef sandwiches were the main counterexample I thought of, since I have indeed had some with monstrous amounts heaped on. However, I get the impression that those are intentionally sandwiches of excess, and most people would consider it odd to have one every day, unlike burgers.
I assume it's a class thing-- a burger and fries can be cheap and convenient. Only inferior people want them.
Fast food burgers can be low-priced and convenient, though I wouldn't call them "cheap" for what you get nowadays.
But a burger at a real sit-down restaurant runs $10, or $15 at more mid-range restaurants, and most aren't even particularly crave-worthy. Sure, they're acceptable, but nothing special.
Re 42 - to make the obvious argument, it's because the wokes have a point. Racism, sexism, etc, really were (maybe are?) too accepted in society, even if some people go too far.
As for timing, why now - because of some events that made their argument more compelling. The fights over gay marriage that the left won decisively, the Bush administration being unpopular, and then Trump saying on tape he groped women and winning a presidential election a month later.
Re 45 - think it's because of which side feels like it's winning/ascendant, which side feels like it's losing. Obama won in 2008 promising to be a compromiser; my memory of 2008 and 2012 is Republican primary debates were the candidates one-upping each other about who is more pure in their conservatism. When Obama won in 2012 it switched (again gay marriage played a big role, 2012 is when it won a popular referendum for the first time, and became clear it would never lose again). (of course this also relates to the above, not surprising wokeness took off on Obama's 2nd term).
Re 46 - I've heard that older buildings had fewer windows because of climate control (I think especially a lack of AC).
>Racism, sexism, etc, really were (maybe are?) too accepted in society
That doesn't answer the question of why wokeness became dominant at the specific time that it did, given that essentially everyone (including woke people) agree that Western society was vastly more racist, sexist etc. in the past than it is now.
I think there's much more to it (social media) but that actually supports the original point: further back in the past, there weren't enough opponents of racism/sexism to sustain a movement.
I think somewhere in the 90s-00s is when anti racism and anti sexism became mainstream--but crucially not so dominant that they had nothing left to criticize.
At notable points in American history, there were enough opponents of racism to fight a war, and there were enough opponents of sexism to get multiple constitutional amendments.
It's not the number of opponents, or apparently even having something to oppose.
> there were enough opponents of racism
They weren't against racism, they were against slavery. They still didn't see blacks as actual equals.
Honestly, most of them were just against disunion.
Yes, I'm being a bit facetious because the terms are so slippery and the movement have a tendency to be in search of something to criticize.
I mean, that's not really why the war was fought, and racism remained strong enough for Jim Crow to last 70 years after the war.
But sure, I'm being a bit deliberately vague about what "mainstream" and "anti racist" mean.
And I definitely think social media is important in changing how many people identify as anti-racist, how many anti-racists people perceive there to be, and how people define racism and anti racism.
But the basic argument I'm making is, if "liberalness on race/gender/identity issues" is normally distributed across the population, and if wokeness is the set of points past some threshold on whatever scale we're measuring this, then if the population mean shifts in that direction, you'll see way more woke people.
"Society was less racist in the past" is basically "the mean liberalness shifted in the liberal direction", so under this very simple model you'd expect to see more wokeness, not less, as society liberalizes.
Obviously that's overly simplistic, but I think it's a pretty reasonable first approximation.
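For what it's worth, the arithmetic of the shifted-mean model does check out; here's a toy sketch (the threshold and shift values are made up purely for illustration):

```python
from math import erf, sqrt

def tail_above(threshold, mean, sd=1.0):
    """P(X > threshold) for X ~ Normal(mean, sd), via the error function."""
    z = (threshold - mean) / sd
    return 0.5 * (1 - erf(z / sqrt(2)))

# Fix the "wokeness" threshold at 2 SDs above the old population mean,
# then shift the mean liberal-ward by half an SD.
old = tail_above(2.0, mean=0.0)   # roughly 2.3% of the population
new = tail_above(2.0, mean=0.5)   # roughly 6.7% -- nearly triple
print(old, new)
```

A modest shift in the mean produces an outsized change in the tail mass, which is the whole force of the argument: liberalizing society can make the visibly-woke fraction explode even though nothing special happened at the extreme.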
Yeah, related to the vagueness I am being a bit facetious. We're dealing with such slippery terms, that mean something different to everyone that uses them and that shift depending on the context, so often constrained and gerrymandered to cover whatever is convenient at the time.
I think the normal distribution example is interesting and certainly reasonable enough, but (as part of the slipperiness of language) I would disagree that mean liberalness increasing necessarily increase wokeness, since wokeness tends to decrease liberalness. But that may be extending too far for a first approximation, and as we get into the second we notice that the model falls apart at the tails.
I mean "liberal" in the one-dimensional "pro-slavery, pro-Jim Crow, pro-traditional gender roles to pro-Black Panther, pro-Valerie Solanas axis" sense.
Like, just a one number summary of "how much do you construct your identity in opposition to anything that can be construed as 60s-era racism".
"That doesn't answer the question of why wokeness became dominant at the specific time"
This is what I talk about in the 2nd paragraph in my comment, which started with "As for timing, why now - because".
But none of them really seem particularly unique to the mid-2010s.
Trump was far from the first philandering or handsy President (JFK, LBJ among many others). Bill Clinton had been accused of sexual misconduct, sexual assault and rape on numerous occasions before, during and after his presidency.
Bush was far from the first unpopular Republican President. Reagan only had a 35% approval rating in 1983.
Gay marriage has only the most tangential connection to the racism and sexism you claim was the motivating force behind wokeness. If a massive legislative victory can prompt such a reaction, why didn't anything like this follow Roe v. Wade, which actually /does/ pertain to the racism and sexism you're talking about?
My guess is on interconnectedness. We never had social media that could amplify stuff happening in the world in a matter of hours, largely without any editorial board deciding what should or shouldn't be said. When most of the world came to you via the newspaper or the evening news on TV, it was a lot harder for the viral spread to catch on.
This sounds much less like "wokeness became dominant because it's obviously true and correct" and much more like "wokeness became dominant because of the rise of social media", which was a large part of Paul Graham's original thesis.
I mean, not necessarily. There is true information that is suppressed, or true information that is memetically viral but only in some communities, and total non-starters in others. Maybe in the '70s, a deep understanding of racism and its problems was commonplace in urban and academic areas, whereas in distant rural farms, the inferential gap would be too great to explain it to a farmer.
But now that social media is here, claims can Moloch and toxoplasma their way to ascendancy, and everyone feels the need to have an opinion, be it to defend themselves from others, or just because they feel a need to speak their minds.
Wokeness can be right or wrong, but like all ideas, it needs a good memetic environment to spread.
EDIT: forgot to say where the deep understanding would be commonplace
The importance of gay marriage is less the size of its "connection" to racism and sexism, and more the speed and size of the turnaround in public opinion, and the size of the victory, not just legally but societally (and gay marriage is emblematic of larger turnarounds on issues around gay people). Abortion wasn't like that. I can't tell you that nothing has ever happened like that, but nothing recent comes to mind.
But just to make the connection, in case not clear to anyone, the woke millennials of 2020 were the same people being pro-gay marriage back in 2004/2008 when that was a pretty minority view.
Re Clinton - before my time, but I don't think anything like the Access Hollywood tape came out. Big difference between accusations and an actual taped admission of guilt.
Re unpopularity - not sure what Reagan is supposed to prove, he won 49 states the year after you cite. Bush was unpopular, never turned it around, and then his side lost big in 2008. This by itself isn't enough, but it was a contributing factor.
Paying out nearly a million in settlement fees isn't quite the same thing as a taped admission of guilt, but certainly the same ballpark.
Additionally, "wokeness became dominant because of how odious Trump is" seems ahistorical. The author of this blog was complaining about social justice (as it was then known) as early as 2013, the "Great Awokening" is usually stated to have taken place in 2014 (the year of Gamergate and the Ferguson riots) and many suspect Trump was elected in part because of widespread opposition to wokeness. The author of this blog even wrote an article arguing that, were Trump to be elected President, it would cause woke people to double down.
"His side lost big in 2008" - you mean a completely different candidate lost in 2008, because Bush had already reached his term limit. That's like arguing that Hillary losing in 2016 reflects poorly on Obama personally.
"Paying out nearly a million in settlement fees isn't quite the same thing as a taped admission of guilt, but certainly the same ballpark."
Definitely is not the same ballpark! Everyone understands people can settle lawsuits not because they're guilty but because they think it's better than going to trial; also, people will usually support their guy in ambiguous situations, so there's a big difference when he actually is on tape admitting it, at which point it's no longer ambiguous.
But also worth noting that part of the effects of wokeness is many Democrats souring on Bill Clinton.
"Additionally, "wokeness became dominant because of how odious Trump is" seems ahistorical. The author of this blog was complaining about social justice (as it was then known) as early as 2013"
Wokeness existing in 2013, "Great Awokening" being coined in 2014, and "wokeness becoming dominant" because of 2016 aren't in conflict.
But more importantly I didn't say it "became dominant because of Trump", I said that was one event that made it more compelling (same with the arguments above re Clinton). I couldn't tell you exactly how much of the prominence of wokeness to attribute to various events, percentage-wise, just that all these things contributed.
"That's like arguing that Hillary losing in 2016 reflects poorly on Obama personally"
I think it's clearly true that a sitting president's popularity has a strong influence on their own party's candidate if it's a different person.
People don't focus on this in 2016 in part because it was so close, and in part because Hillary had a lot of baggage, but if people liked Obama more (or less), then Hillary would have likely won (or lost by more).
Do people dispute this with Biden in 2024? Clearly his unpopularity was a major factor.
Disagree about gay marriage being tangential to racism and sexism; I would guess that a lot of the early, "ideological" support for gay marriage came from people who explicitly analogized it to the civil rights movement, interracial marriage, etc.
Support for gay marriage was (among other things) a barometer for how many people supported the ideas of the civil rights movement strongly enough to extend the basic principles to a new contested domain.
The fact that gay marriage became mainstream so quickly I think shows a decent amount of "latent wokeness" in the population; I am not sure I totally understand what argument Jack is making, so I don't know if I agree with it, but I don't think it's crazy to argue that the success of gay marriage might have simultaneously convinced the proto-woke of the early 2010s that the ground was more fertile for their beliefs than it was, and of the essential rightness of their beliefs.
Again, not to dismiss that there were other important factors, or to imply that I totally believe this model, but I think there's something here worth exploring
According to the bugman (and I'm inclined to believe him): he particularly emphasizes that it's because the Puritans who absorbed Marxism during their childhood in the ~1960's gradually became tenured at Harvard. And since Harvard is a priesthood (there's a reason he calls the 4th Estate & Academia "The Cathedral"), Harvard's purity-spiral ethos naturally disseminated to wider society.
(This dovetails with some other theories I've heard elsewhere, the common theme being that it's not enough to look at the surface-level ideology. You also have to look at the cultural substrate in which the ideology is embedded.)
His latest post on GM directly grapples with PG's own musings on wokism, btw.
“Wokeness” isn’t that different than “Political Correctness” in the 90’s. It’s the same argument, just at a higher volume and after some significant victories.
It was hard to understand why Elon Musk would lie so much about video games, but the follow-up is even more baffling. How is it possible that anyone could *own* a social media platform and not understand that content creators have video editors, while having a conversation with a creator about bringing their content to X? (Elon decided to leak these DMs in retaliation for this creator talking about him in Path of Exile.)
https://x.com/elonmusk/status/1879798957301510341
I can't even imagine a headspace that would lead to this fundamental of a mistake, no matter how work-drunk or even drunk-drunk. What would even be an analogy in another industry for not understanding something like this?
I'm more confused on why he hasn't disabled community notes on his own posts yet. Obviously there's nothing stopping him from doing that, so it's strange that he's insane enough to keep digging this hole while also having the discretion to not go scorched earth on his critics.
Elon regularly brags about how nobody is exempt from Community Notes, not even him [0]. He's been repeating that mantra too long and would probably look really bad if he backed out now.
[0] https://x.com/elonmusk/status/1873862372101968352
This is so disgraceful of Musk. Even if that streamer *did* work for an editor of some sort, that doesn't begin to justify going out of your way to leak those DMs to hundreds of millions of followers! And this man expects me to link my bank account & send in my government IDs to X and use his Everything App?
Reading Graham's essay on PC and woke, I was surprised he didn't link to this classic Bloom County cartoon which captured the whole phenomenon perfectly back in 1988:
https://www.facebook.com/story.php?story_fbid=2537950212902383&id=100044325854853
I've been complaining about this for years. It's literally the same words in a different order.
I'm getting "Page Not Found" for the link (1/17/2025 9:43 EST). Is there an alternative location?
https://1.bp.blogspot.com/-MtqOA5zkL3g/XPH6S9JNjDI/AAAAAAADgAM/FyO6oHJP9HwACkmaXP9HjQXSqNVdrT9oACLcBGAs/s640/racism%2B%25281%2529.png
Different, likely original, wording: https://miro.medium.com/v2/resize:fit:720/format:webp/1*yJU5oQreBijumVoDcVL-Ww.png
Many Thanks! If the Woke insist on changing the currently-polite-term repeatedly, it would be nice if they would at least do it on a consistent schedule (say Jan 1st of the first year of each decade) and publicly announce what the new politically correct term _is_. Ahh, the joys of language policing...
The point isn't to have people use any particular language. The point is to tell people to use something other than what they're using. An official announcement would defeat that purpose.
Many Thanks! So it is a variation on "Cruelty is the point." ?
In my understanding? Not so much cruelty. The point is that you give someone else a command and they obey you; it's to demonstrate that you're more powerful than they are and they have to do what you tell them.
Is the revised Monty Hall actually any different? In the normal Monty Hall, you have one success state and two failure states; the revised Monty Hall is the same.
Probabilities and statistics always get unintuitive when there are constraints on the distribution. I don't think ones where you can write out the whole probability tree are very difficult, and it's true that, just like in the original problem, the correct move is twice as good as the incorrect move. But I wouldn't assume that prima facie: we're eliminating the outcome where Monty opens the door showing the goat you really wanted, so you need to check that the numbers fall out the same way.
> I don't think ones where you can write out the whole probability tree are very difficult
I remember 15 years ago reading EY's Bayes theorem tutorial and he said that we probably got the problems wrong and this is why we need the theorem. No, I got them right because I wrote down all the possibilities in detail, I don't need no fancy theorem.
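Writing down all the possibilities in detail, as described above, amounts to summing the full probability tree. A minimal sketch for the standard Monty Hall game (the "revised" variant's exact rules aren't spelled out here, so this only covers the original; the door labels and always-pick-door-0 convention are my own simplifying assumptions, justified by symmetry):

```python
from fractions import Fraction

def monty_hall_enumeration():
    """Enumerate the full probability tree for the standard Monty Hall game.

    Doors are 0, 1, 2; the player always picks door 0 (fine by symmetry).
    Monty opens a door that is neither the player's pick nor the car,
    choosing uniformly at random when he has two options.
    """
    stay_wins = Fraction(0)
    switch_wins = Fraction(0)
    for car in range(3):  # car placed uniformly at random
        p_car = Fraction(1, 3)
        montys_options = [d for d in range(3) if d != 0 and d != car]
        for opened in montys_options:  # Monty's (possibly forced) choice
            p_branch = p_car / len(montys_options)
            switch_to = next(d for d in range(3) if d != 0 and d != opened)
            if car == 0:
                stay_wins += p_branch
            if car == switch_to:
                switch_wins += p_branch
    return stay_wins, switch_wins

stay, switch = monty_hall_enumeration()
print(stay, switch)  # -> 1/3 2/3: switching is exactly twice as good
```

Exact fractions rather than floats make the "twice as good" claim fall straight out of the tree with no rounding doubts, which is really all the comment's "write down all the possibilities" method is.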
#14- Felix Hill, Scott's question about what dose of ketamine he took. I did a lot of reading on a Reddit sub about ketamine, one where everyone was taking it under the supervision of an MD. Those who were getting infusions at the doctor's office sounded like they were getting high doses. "K-holing" was considered an especially desirable reboot. I experimented with ketamine provided by a doctor friend, k-holed once, and it was one of the most unpleasant experiences of my life. It's sort of like being unconscious, except that there's still a little scrap of you awake and aware that you are completely out of it. The bad effects of that experience didn't linger though. I was fine the next day.
Sounds like Hill was tried on many somatic treatments once things really started to go downhill. I wonder if he was ever tried on an MAOI.
> is there any health belief that foreign countries make fun of Americans for? (I’m not looking for conspiracy theories about vaccines
does any non-western-Europe country mock Americans for being too untrusting of American doctors?
"hmm yes I survived the collapse of the soviet union, but Americans should believe the vax are safe because the extremely rich capitalists say so and buy ads on their news networks"
Have there actually been ads saying vaccines are safe? I don't think I've ever seen any, and I would expect them to backfire.
In December 2020 CBS News reported a $250 million ad campaign to convince people the vaccines were safe: https://www.cbsnews.com/news/covid-vaccine-safety-250-million-dollar-marketing-campaign/
And in 2022, Kansas' health department pulled ads claiming they were safe based on complaints about the CDC VAERS data: https://www.cjonline.com/story/news/politics/2022/03/03/kansas-tv-ads-calling-covid-vaccines-safe-and-effective-pulled/9346720002/
https://www.youtube.com/watch?v=QAkQlZgnbUQ
and there are the psa's https://www.youtube.com/watch?v=pYJXrvLAxEg
though given your standard of "the New York Times doesn't lie", they will just be telling you to get a vaccine and implying everyone who takes it is fine
> I would expect them to backfire.
For a smaller part of the population than you would hope, and it was probably one of the more effective propaganda campaigns; but I yell at the TV every time.
Re. 18 what's wrong with Elon Musk - the longer this goes on, the more I believe that Musk is a narcissist/ sociopath who has chosen the "autistic polymath genius" persona as his mask, and hates hates hates it when he's publicly forced to admit that he was wrong, because it threatens both his ego and his disguise. He does have some undeniable talents - on one side, for buying promising companies and hiring talented people and screaming at them until they (sometimes) come up with something impressive; on the other side, for convincing people he's bloody brilliant (until they hear him talk about something they're knowledgeable about).
Drugs are pretty well established and he is just an absurd risk taker in that he made "fuck you money" at least a decade ago and has kept doubling down
Pointing out drug consumption and absurd risk-taking behavior is not exactly arguing against the narcissism/ sociopathy hypothesis, is it?
"shy librarian paradox" - you're suggesting two possibilities where one fits the data
This doesn't explain a sudden personality shift around 2020 though.
Part of the personality shift may be Musk getting more brazen as the fact that he can get away with almost anything sinks in.
Part may just be a shift in public perception. Some people have been calling out Musk's deceptions and bullshitting long before 2020, but back then the façade was still intact, so few people were willing to listen. But the evidence has kept piling up.
Because nothing crazy or unprecedented happened in 2020. No sir.
Many people I considered to be very reasonable and well-adjusted underwent big personality shifts around then. It was a uniquely polarizing time, and I am very skeptical Sam Harris' side of the story is any more accurate than Musk's.