Please explain how the George Floyd protests have anything to do with Marxism, outside of the use of that word as a right-wing boogeyman. "The right-wingers are correct that Black Lives Matter is fundamentally Marxist in nature, except that's a good thing" is not a take that most BLM supporters would get behind. It just seems like you're 1. buying into the outlandish claims of far-rightists, even if you disagree with their values, and 2. trying to co-opt what's fundamentally a progressive liberal movement for your own ideology.
I'm not the person you put the question to, but I disagree with Marx's prediction that the middle class would vanish and the working class be pushed down, since it's the opposite of what happened. He beats Malthus in the competition for a prominent intellectual whose prediction was the opposite of what happened.
I also disagree with the labor theory of value, but Marx in _Capital_ concedes that, taken as a theory of exchange value, it is inconsistent with capital market equilibrium, so he disagrees with it too.
From each according to his ability, to each according to his needs - the creed of a slightly benevolent slave-owner. Marxism is a philosophy of everyone enslaved by society for the greater good, with the hope of a future where everyone behaves as if they are this kind of slave voluntarily, through some kind of conditioning.
I'm a socially progressive and fiscally center-left liberal/civil libertarian who broadly supports free-market capitalism, albeit more in the sense of European-style welfare capitalism than Reaganomics. Rather annoyingly, I'm very used to having my own stances dismissed as "Cultural Marxism" by people further right than me. (This is often more because of my social views than my fiscal ones; I've seen major corporations, right-libertarian market adherents, and even full-on anarcho-capitalists get called "Cultural Marxists" too, simply for being pro-BLM or pro-feminism or pro-LGBTQ). While this says more about the right-wing conservatives making those claims than it does about actual Marxists, it does incline me to distinguish myself from those actual Marxists as much as possible.
I'm also very used to getting berated by people further left than me, including many actual Marxists, for rejecting the idea of a revolution against the liberal capitalist system (which I consider to be extremely unlikely to happen, even less likely to succeed, and virtually certain to result in far worse outcomes than the status quo on the very slim chance that it is successful). Additionally, I often see Marxists defending regimes that I find absolutely deplorable (such as the Soviet Union or Maoist China or North Korea), or using false equivalences and specious logic to argue in favor of Communist Party dictatorships (typically something along the lines of: "all governments are fundamentally authoritarian and hierarchical, but at least Communist Party dictatorships are working towards freedom and equality for everyone in the long run, so they're better than all the other governments that only serve to uphold domination by the capitalist elite"). Furthermore, Marxism itself seems to be based around a sort of circular logic where any opposition to Marxism can be dismissed as either bourgeois deception rooted in naked self-interest, or "false consciousness" resulting from bourgeois propaganda. This is evident in how often I've heard Marxists condescendingly insist that I would agree with them if I actually read Marx, as if the tenets of Marxism are so obviously and irrefutably true that no one who truly understood them could genuinely disagree with them in good faith. (When I point out that I have a Master's degree in Political Science and have read The Communist Manifesto as well as Capital, they typically either accuse me of lying outright, make some vague accusation that I didn't fully comprehend the material, or move the goalposts by claiming that I need to read all of Marx's works before I can make a judgment.) All of these factors have left me with an extremely negative opinion of Marxists.
As for the problems I have with Marx's theories in their own right, that would require an even longer post. Though the fact that his predictions have almost entirely failed to occur (and that Marxists frequently need to creatively "re-interpret" what those predictions meant, in the same way as Christian apocalypse cultists or followers of Nostradamus) certainly plays a major role.
> it does incline me to distinguish myself from those actual Marxists as much as possible.
By doing so you are playing to the stereotype, since the very claim is that Marxist-style interpretations were applied to social issues, not that these people believe in Marxism. This ideology is especially attractive to well-off people, as it can be coupled with support for capitalism.
As per jstr below. Do you believe it would have been possible for Chauvin to get a fair trial? It was obvious that acquittal, or even conviction for manslaughter, which was the most plausible charge, would result in riots, quite likely that it would result in the jurors being identified, demonized, very likely assaulted, possibly killed.
If you were on the jury and believed the proper verdict was acquittal or manslaughter, do you think you would have held out for it, producing, and being blamed for producing, a hung jury?
Jury intimidation. The fact that it had been made clear, pretty explicitly by one congresswoman, that an acquittal would lead to riots, and was likely that even a manslaughter conviction, which there was some basis for, would.
"It was obvious that acquittal, or even conviction for manslaughter, which was the most plausible charge, would result in riots, quite likely that it would result in the jurors being identified, demonized, very likely assaulted, possibly killed."
What makes this 'obvious'? This alleged fear of angry black people and their left-wing allies has not led to similar results in the Michael Brown and Eric Garner cases, which didn't even make it to indictment. Pretty much true for Breonna Taylor, too, the 'wanton endangerment' indictment was practically dark humour.
Also, none of the jurors in the germinal case of Trayvon Martin, to the best of my knowledge, have since been assaulted or killed. Granted, that wasn't a police affair, but it centered around race and presumably deadly assailants with axes to grind against the system would not have been too picky.
I'm very surprised by the claim that Floyd wasn't murdered and I'd like to know why you believe that.
As you know, Chauvin was convicted of murder by a jury which determined that his actions towards Floyd fit the legal definition of murder. Do you think the legal definition of murder is wrong? Or do you think that the jury was mistaken? If the former, which of the legal elements of murder do you think are incorrect? If the latter, which elements did Chauvin's conduct fail to satisfy?
But, the DA's office saw all those things and chose to press charges, and the jurors saw all those things and chose to convict - unanimously - and the judge saw all those things and decided that as a matter of law they did not preclude a verdict of guilty on the charge of murder in the second degree or murder in the third degree. It seems everyone involved in this process - experts and non-experts alike - saw all the evidence that you claim exonerates Chauvin. That means you are asking me to take it at your word that if I watch the two half-hour police cam videos and read the autopsy report and the medical examiner's trial testimony, I will conclude that Floyd wasn't murdered. But it's your word against at least fourteen other people's words, all of whom are more familiar with the case than you.
So unless you can give me the specific element of murder that you feel was missing from Chauvin's actions, and the specific fact or facts that you feel demonstrate that this element was missing, I am absolutely not willing to take you at your word. "Just trust me, it's in the evidence" is not a legal argument. You are the one who seems to think that it is very important that people believe Floyd was not murdered, and yet the only thing you are willing to do to convince us is weave a vague and unsupported conspiracy theory in which BLM and the media somehow corrupted the jury. I think you'd convince a lot more people here in this particular comments section of your point of view if you could articulate an actual legal argument as to why you believe that Floyd was not murdered.
Re: 2: this is a video of a doctor explaining how Chauvin's actions caused Floyd's death. It completely contradicts your point.
Re: 3: you are saying that Chauvin was clearly told that Floyd was in respiratory distress before he applied an unlawful restraint. Again, this completely contradicts your point.
"Prosecutors didn’t have to prove Chauvin’s restraint was the sole cause of Floyd’s death, only that his conduct was a 'substantial causal factor.'"
The video which you linked is the explanation of why Chauvin's restraint, and the injuries and pain it caused, constituted a substantial causal factor. That's why I say it contradicts your point.
"WHAT’S SECOND-DEGREE UNINTENTIONAL MURDER?
It’s also called felony murder. To prove this count, prosecutors had to show that Chauvin killed Floyd while committing or trying to commit a felony — in this case, third-degree assault. They didn’t have to prove Chauvin intended to kill Floyd, only that he intended to apply unlawful force that caused bodily harm.
Prosecutors called several medical experts who testified that Floyd died from a lack of oxygen because of the way he was restrained. A use of force expert also said it was unreasonable to hold Floyd in the prone position for 9 minutes, 29 seconds, handcuffed and face-down."
The reason why the use of force expert is relevant is because Chauvin's use of force was only unlawful because it went against police training and because a reasonable police officer would not have used such force in the course of their duty. Generally police officers have a lot of latitude in determining appropriate levels of force, so it is extraordinary for a jury to find that a police officer acted unreasonably. That's why the copious video evidence and testimony from multiple experts were needed to prove that the use of force was unreasonable, and therefore unlawful.
There's more in the article - about the third-degree murder and manslaughter counts, plus loads of other links to individual pieces of testimony and explainers about their role in the trial and other implications.
I would recommend, before you continue calling people on here ignorant and lazy and ranting about how Scott and all of his readers have been hoodwinked by Marxist propaganda, that you acquire a single modicum of understanding of what the actual law is that applies to this case.
In particular, if you are going to argue that the jury was mistaken, you should be talking about the legal definition of murder in Minnesota and whether or not Chauvin's actions meet that definition. Because your presenting a set of facts which are completely in line with and clearly demonstrate the legal definition of murder and saying "see! no murder! you ignorant fools!" is neither a valid nor convincing argument that Floyd was not murdered.
The same question I put to dionysus above. Acquittal, even conviction for manslaughter, would have led to rioting. Do you disagree?
Voting for acquittal or manslaughter, especially if it led to a hung jury, would probably have resulted in the name or names responsible being leaked and those jurors and their families being targeted by massive hostility, possibly physical assault. Do you disagree?
If you agree with both propositions, do you think that if you were a juror who believed he was innocent or guilty only of manslaughter, you would have voted that way?
Oh, are we just responding to each other's questions with questions? Do you think this is a good way of making progress towards the truth? What assurances do I have that if I answer your questions, you will answer my questions?
Hypothetical insinuations about the motivations of jurors are not even close to an argument that Chauvin didn't commit murder. One wonders why you are dodging the simple, direct question of which element of murder you think was absent from Chauvin's actions.
I recall a previous argument in which you took me to task for assuming that the authors of the Great Barrington Declaration knew that their proposals would have led to mass preventable mortality. But here you are pretending to know the motivations of 12 independent jurors based on your personal assessment of the likely consequences of their votes? I feel like you can't have this both ways.
So I'll answer your questions, why not.
"Acquittal, even conviction for manslaughter, would have led to rioting. Do you disagree?"
I believe based on evidence from most other protests of police acquittals that protests would have been mostly peaceful and smaller in scale than the protests of the murder itself. I also believe that there's an important distinction between rioting and vandalism. I'd say if there was rioting, 95% chance it would have been less severe than the 2018 Super Bowl riots in Philly. But more importantly, I am not morally responsible for someone's reaction to decisions if I do something legal and they choose to commit a crime, so while the causality here might exist, I do not believe it would impose an ethical duty.
"Voting for acquittal or manslaughter, especially if it led to a hung jury, would probably have resulted in the name or names responsible being leaked and those jurors and their families being targeted by massive hostility, possibly physical assault. Do you disagree?"
Yes, I disagree with the leak hypothesis, and I disagree that jurors who voted to acquit and their families would be physically assaulted. I don't know where you are getting the idea that BLM is like the mafia. Can you give a single example of a case where a cop was acquitted for murdering a black person and the jurors and their families became targets of violence? Like literally, where are you getting this stuff?
"If you agree with both propositions, do you think that if you were a juror who believed he was innocent or guilty only of manslaughter, you have voted that way?"
I would not vote to put an innocent man in jail in order to prevent riots or avoid having hostility directed at me or my family. That's crazy.
Haha 1 is very true. The beginning of Thinking Fast and Slow was full of gotchas, those little thought experiments where the reader submits to being tricked so the authors can prove a point. I skipped them all.
What about behavioral economics is friendly? Or unfriendly, for that matter? I didn't see anything about it being for anyone's "own good".
Like all sciences, the principles investigated by behavioral economics aren't good or bad. They're just there. Whether those principles are used to get you to tip your Uber driver or to encourage you to get vaccinated, you can't use the applications of a science as a condemnation of the science itself.
The fact is, we're manipulated by our minds every moment of every day. Behavioral economics is just investigating how and why. You can't hold that against it any more than you can fault biomedical science for enabling cosmetic surgery. If you're so averse to being manipulated, you should be studying more behavioral economics so you can learn how to avoid the cognitive biases in question.
I mean I’d absolutely condemn the scientific sub field of “kidnapping people or buying access to people in jail and forcibly administering them drugs as an experiment”, which has been done.
And behavioral economics is “investigating” how and why in the same way that international relations professors and think tanks are just “investigating” - no, they are not; they are creating and implementing ideas too.
And cognitive biases are not real, believing they are real is useless, just don’t buy the thing being marketed!
He isn't describing behavioral economics. He is describing "libertarian paternalism," the policy advocated in _Nudges_ and based on behavioral economics. A natural enough confusion.
It is true that the "Nobel Memorial Prize in Economic Sciences" is officially the "Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel" and was not created or funded in the same way as the other prizes. However, the claim that this somehow discredits the award doesn't make much sense to me. The prize is still administered by the same Nobel Foundation using the same nomination and selection process. The prestige of the award comes from the process by which they select the greatest achievers in each area, not based on which rich person initially donated the funds for it.
> The prestige of the award comes from the process by which they select the greatest achievers in each area, not based on which rich person initially donated the funds for it.
I have a bridge you might be interested in. I believe the vast majority of the prestige comes from the name "Nobel", associated with Nobel laureates who made demonstrably great scientific discoveries underpinning much of modern physics and chemistry, and secondly from the royal pomposity.
If it were only the selection criteria that made it prestigious, why does nobody call it the Swedish Riksbank Prize? Because then it would be one of those field-specific academic prizes that far fewer people outside the respective field have heard of, like the Fields Medal or the Abel Prize. For the same reason, the Nobel Peace Prize is still a big deal despite recent recipients' usually lackluster achievements in promoting fraternity between nations or organizing peace congresses or abolishing standing armies.
Note that what you wrote is not true of the Nobel Peace Prize, which was funded by Alfred Nobel but is awarded by a committee appointed by the Norwegian parliament.
I disagree slightly. I would modify this: nudges and behavioral economics and such often are attempts to manipulate people to give up their money and do harmful actions along with giving up the money - and not enough people are noticing and avoiding it. For instance, don’t buy things that you see in ads! Just don’t! A good general rule. Especially something like clothing or shoes - the manufacturing cost is like a tenth of the marketing budget so you’re literally just paying to see ads. And the food you see advertised will kill you a month earlier and make you subtly feel like shit. And all of those companies use all sorts of behavioral econ tricks and psychological whatevers, whether or not they “work”. And that compensates for the product being bad. I wish advertising didn't work...
About manipulation, I do think some of this "science" or science (depending on your view) has been harnessed in gross ways to serve advertising and commercialism.
On the other side though, anything that is planned or designed in an intentional way to serve groups of people, has this element of "nudge" or goal-orientedness built into it.
Parking, transportation, building, and office space design as well as zoning generally all have behavioral nudges built into them. Employer-provided health insurance plans, smart phones, computer operating systems, automobiles, drug store register displays, classrooms, the DMV, and city parks. Those digital speed limit signs are a behavioral nudge, as are old-fashioned regular speed limit signs for that matter.
There's no human-designed system free from built-in bias it wants us to conform to. It's all optimizing for something. The more we disagree with what it's optimizing for, the more I think we're likely to resent feeling nudged by it. Come to think of it, perhaps that's part of what so many people find relieving about being in the woods or in other nature, that sense that this system has no intentional designs on us. It's there for itself and its own ends and so gives us some relief from our own human-centered egos.
For those who are too lazy to click the link: the quote in the post elides "In particular, the asymmetry is one only of the magnitude of the exponent on the same side of unity" and is immediately followed by "What is not immediately obvious is the profound effect that a change in exponent of this magnitude produces".
On George Floyd and the Identifiable Victim Effect: I think a huge amount of the difference was external pressure, specifically the effects of COVID-19 lockdowns, and simmering discontent over Breonna Taylor's death, which shortly preceded US lockdowns.
Some of it is also bias in the people who are inclined to protest. There are notable protests against something less than a tenth of a percent of all murders in the US. Vilfredo Pareto is rolling in his grave.
Well, and not all Identifiable Victims are equal. If you put a picture of Betty White age 90 on a poster talking about limits to access to birth control for poor women, I bet nobody will give a shit. You need to put some ingenue that looks like everyone's teenage daughter. It's always a possibility that the experimenters were incompetent at designing their Identifiable Victim poster -- it is, after all, an art, and advertisers pay big bux to talented people who can do it effectively.
Good point. Also, that experimental design is graphically person versus map, not person versus group of people. I can see why it’s difficult to indicate “group of people” graphically while preventing the viewer from locking onto one of the people. So they used an abstraction. But map hits all the nationalism/country buttons and I’m not sure it hits very many group of people buttons. They may have inadvertently studied single mom versus patriotism.
The police weren't claiming to have intentionally killed him, any more than they intentionally killed Eric Garner when he asphyxiated. But enough kids have been shot by police to rank well above him.
I agree with your point here, but want to point out that in a legal sense, "intention" has a nuanced meaning which includes the concept of what a reasonable person could foresee. If a reasonable person could foresee that Chauvin's actions had a reasonable chance of killing Floyd, then Chauvin's choosing to do the actions constitutes intent.
As to the question: I don't think the outrage in the Floyd case was about who he was, but about the extraordinary way in which he was killed, over a long period of time, with him and surrounding civilians literally begging the cops to stop killing him. So I don't think it was the identifiable victim bias at all, but the raw horror of that particular video.
I agree that Identifiable Victim has little to do with it: Dennis Tuttle and Rhogena Nicholas were identifiable victims of far worse police misconduct (made-up controlled drug purchases to get a warrant), yet the public response was basically nonexistent.
He was a large, strong guy and was resisting arrest at the time, but your statement also seems to be implying that the killing was a deliberate planned act. That's not what I saw on the video. The guy may not even have died from the hold he was in. IIRC he was also yelling "I can't breathe" at the top of his voice whilst standing up next to the cop car, so his breathing difficulties seem to have been somewhat independent of the neck restraint. And he was supposedly on drugs at the time that can cause difficulty breathing (or at least the perception of it).
That, plus his criminal history - all in all, it's very unlikely that anything to do with Floyd himself led to the explosion of interest in the case. Seems more like it didn't matter who he was or what happened; it was more the time and context that mattered.
I did read something arguing that people on drugs who act like Floyd did can suddenly and unexpectedly become very active and violent and a threat to themselves at least.
Also presumably there is potential bias in how important the event is perceived to be. You could view George Floyd as the culmination of multiple related news items, for instance. Also, the saliency of a full-length video probably was important. I suspect you'd see a smaller difference if you played the Floyd video vs a supercut of fairly unjustifiable police killings.
I think there are thresholds, whether local (Ferguson/Brown) or national (George Floyd). And, on the national level, I think George Floyd's death being so obviously wrong yet filmed was a big reason for its impact.
We felt it even in Europe and it's not like we didn't know your police kill unarmed black people in an over the odds way (and our own police are also heavy-handed when dealing with our own disliked minorities).
I don't want to relitigate the whole thing but standing on the neck of someone who repeatedly tells you he cannot breathe and is no longer presenting a threat (if he ever did) is a-okay SOP for police? Are you sure?
Why would you present it as "standing on the neck"? Surely you know that's an outright lie, doesn't that undermine your position?
He had one knee on the back and one knee on the side of the neck. Hard to tell how much pressure he was exerting since his feet were on the ground, but some people were noting that Floyd's head was still moving and turning and other people reenacted the position and had no issues.
I don't think it's obviously wrong to be in that position exerting light pressure to control someone on drugs until the paramedics arrive. In my mind the obvious error was in not checking his breathing, pulse, etc. when concerns were raised about that.
Alexander Kueng did check for a pulse, actually. Neither he nor Chauvin could find it. Thereafter, Chauvin knelt on Floyd for upwards of two additional minutes.
"On drugs" isn't a carte blanche. Floyd, handcuffed and half-asphyxiated, was to any reasonable observer clearly no threat to the four cops around him. And in any case, if Chauvin had actually been following standard police procedure (which the MPD police chief certainly went to great lengths to deny) that arguably constitutes firmer grounds for public outrage than the idea that he was a rogue racist cop.
The cop in question was convicted of murder, and given just how hard it is to convict a cop of anything, I would think that's a verdict we can probably accept.
>your police kill unarmed black people in an over the odds way
Not "over the odds" — a bit under, if anything. This is a media effect. The old SSC itself has an essay on this, and newer data since is even more convincing: police killings of black people are basically as expected given police deaths at the hands of black people, black officers shoot more readily than white officers, etc.
So I knew I wasn't being careful enough when writing that. The general feel I get from the statistics is that the black community is treated as "all criminals, some just haven't been convicted yet".
So if the police go full blast every time they encounter a black person, then yes, I think we can have both the statistics we have and the feeling the black community has, which is (afaict, I'm not black etc) that they don't have a police force, they are dealing with an occupying force.
I think traffic stop data (stops being a lot more common than shootings of unarmed people) is in a way better for understanding that police do not treat black citizens with nearly the same leniency and tact accorded to white people.
Note that even the research below is not foolproof, and the authors recognize it.
I don't think the black community has that feeling in general. If I recall, in polls blacks tend to want somewhat more policing, because they are disproportionately the victims of crime.
Obviously there is a confounder here which is that American blacks commit crimes at a much higher level than other races. Anytime there is a discussion of black-white, I want to also see white-Asian (Asians committing much less crime than whites), men-women (men committing much more crime than women), blacks-immigrant blacks (many immigrant Africans are more successful than whites, not sure about criminality).
Again, I'm not black, not even American but my firm impression from being very interested in America is that the black community wanted *more police if it was *better police i.e. they recognize they're more likely to be victims, thus want protection but are unhappy when their nominal protectors turn out to be yet another issue they have to deal with.
It's not so much the volume of interactions Black Americans have with the police that troubles them or differentiates them from other racial groups, but rather the quality of those interactions.
Most Black Americans want the police to spend at least as much time in their area as they currently do, indicating that they value the need for the service that police provide. However, that exposure comes with more trepidation for Black than White or Hispanic Americans about what they might experience in a police encounter.
Pretty sure that police treat Asians, women etc. better than blacks (yeah I know, I have no hard stats to offer) but it's not really relevant, because while the issue is being couched as institutional discrimination / police being racist pigs, it's IMO much more likely that the real problem is police having prejudices / stereotypes about different people (which, as often with stereotypes, may even be grounded in reality at the population level) but that then negatively impacts individuals in the black population who are actually totally innocent (like the black Harvard professor who got accosted by police on the steps of his own home because he was mistaken for a burglar, etc.). So police probably have the same positive non-criminal stereotypes about Asians as you do, but their resulting courteous treatment of that community doesn't say anything about how badly they may be behaving towards another.
That assumes we have accurate stats. The report on George Floyd initially said something like "died of a medical crisis during arrest". No camera, and it's not only not a crime, it's not even included in the stats for police killings.
The video mattered a great deal, yeah. It was unedited and free of commentary, and it narrowed down the room for equivocation on whether Floyd was a threat, which the police, and people who side with the police in these cases, would normally automatically claim.
As a result, it won black people a lot of non-black allies in this particular case, which was the main reason for the volume of interest and media exposure, and the feedback loop of anger.
Exactly - I think what Scott is missing here is that there wouldn't have been protests because of George Floyd if people hadn't ALSO been aware of the statistics (and other cases) saying that this kind of thing happens a lot (or anyway far too often). As with everything that has to do with communicating with lots of (different) people: it's easier to have a simple slogan/a poster child for something than to always bring a complex message, which for me explains the George Floyd murals.
Also the video being painful and widely present on social media was a big contributor. Also, a lot of activists really wanted such a case and helped spread it!
COVID may have modulated the response up or down a bit, but I think you're under-selling the impact of the video of Floyd's death. It was powerful, it was tangible, and it was unequivocal.
A man being calmly choked to death for no great reason by police while the public stands around and tells them not to, while recording it on camera, was a novel experience. If it happens again I'm sure the protests will be smaller.
There's another effect you didn't mention, too. You mentioned how people generally refuse an offer where someone offers to flip a coin and you get $60 if heads and lose $40 if tails.
If a stranger offered me that deal, I'd very reasonably assume that the coin was weighted to be very likely to land tails, or one of those coins with tails on both sides or something.
I assume there's a slight bias toward this kind of thing that people have learned from untrustworthy people. "What? You want to give me something that you say is equal value? Now I suspect that what you offer is worse or what I have is better than I think, so I'll keep what I have".
Scientific studies on results like those don't actually offer people that bet, though. They don't stop people on the street and ask them to flip a coin for money, they ask them to participate in a study to investigate their decision making, done by x researcher at y university and approved by an ethics committee. In reality people might think "this guy is trying to scam me" when asked if they would take a bet, but that explanation doesn't hold for scientific studies.
How DO studies like this work? I remember reading Thinking Fast and Slow and they talked about how when they worded easy questions in tricky ways, students in their studies would get them wrong. Well, sure -- the students have no incentive to try, right?
They don't, but social pressure, especially coming from an authority figure in a science lab, is a very powerful force. People will usually try because it's expected of them.
Example methodology from one of the experiments cited by Gal-Rucker (as far as I know this working paper was unfortunately never published, this is from a pre-preprint draft I received but I found it much better than anything they ended up actually publishing):
Method:
Participants, 417 individuals, were recruited through Amazon Mechanical Turk, and randomly assigned to WTA, WTP-Retain, or WTP-Obtain conditions. Scenarios varied by condition as follows:
WTP-Obtain:
Imagine you can buy the elegant stainless steel travel mug pictured below. It has a retail price of $25.
[Picture of Mug].
What is the most you would be willing to pay to buy the mug?
WTP-Retain:
Imagine you own the elegant stainless steel travel mug pictured below and that it is in new and unused condition. It has a retail price of $25.
[Picture of Mug]
Imagine you accidentally left it behind at a hotel. You receive a call from the clerk at the hotel stating that he can ship it directly back to you if you are willing to pay the costs. The standard shipping rate is typically about $25.
What is the most you would be willing to pay to keep from losing your mug?
WTA:
Imagine you own the elegant stainless steel travel mug pictured below. It is in new and unused condition and in the original packaging. It has a retail price of $25.
[Picture of Mug]
What is the least amount of money you would accept to sell the mug?
Afterwards, participants in the WTP-Obtain and WTP-Retain conditions indicated whether they thought of their decision in terms of paying to prevent the loss of the mug or in terms of paying to gain the mug. Specifically, participants were asked, “how did you think of your decision,” with the two options being, “I was thinking of what I would be willing to pay to keep from losing my mug,” and “I was thinking of what I would be willing to pay to gain a mug.”
"Imagine you can buy the elegant stainless steel travel mug pictured below. It has a retail price of $25.
What is the most you would be willing to pay to buy the mug?"
Nothing. I don't use travel mugs and I distrust advertising bumpf like "elegant" and "stainless steel" in the same sentence. I'd need to see the picture of the mug to be convinced it was anything other than "functional piece like a hundred others with nothing distinguishing about it", and even then I'd be "I don't need or want one, so I'm paying nothing".
As for questions (2) and (3), unless the mug was something of sentimental value - my dear old granny bought it for me for my birthday before she sadly passed away, for instance - then if it's going to cost me as much to have it posted back to me as buying a new mug, the hotel can keep it and I'm buying a new mug.
For (3), plainly the minimum sale price would be $25 plus whatever postage and packing costs I have to pay to send off this mug; if I can find someone willing to pay more, that's a bonus. If I can find a mug (ha!) willing to pay me $100 for a $25 mug, then that's what I'm asking.
I think you are hitting the nail on the head here. I have often thought that behavioral economists have a flawed theory of mind behind a lot of their findings, in so far as they assume people think exactly like them (or should) and then run the numbers to see what deviates from themselves.
Especially on the selling side; how many people ever liquidate lots of stuff they have but don't like that much anymore? The annoyance and irritation of going through the transaction, not to mention the hidden issue of "maybe I will want that later and have to reacquire one", makes sitting on your stuff a lot more rational than assuming instant transactions accounts for.
I wouldn't pay $25 for a travel mug, that sounds too expensive, and I'm (presumably) many times richer than the average Mechanical Turker, for whom $25 might be an entire day's work.
If you ask me about buying the mug, then I'll answer as me. But if you ask me about retaining the mug, then I'll have to answer as a hypothetical version of me who actually liked that mug enough to pay $25 for it in the first place. Obviously, hypothetical mug-owning me is going to value the mug more than real me, as demonstrated by the fact that he already owns one and I don't.
I had a Williams and Sonoma Cantine soup bowl that I liked a lot. I didn't pay very much for it-- under $15, I think.
I broke it.
There was one available on ebay for $50. After some wavering, I bought it.
I happened to have a shard of the first bowl, so I could tell they were the same, but somehow, I didn't like the new bowl as much. The fancy crackled (?) glaze on the inside wasn't as entrancing.
Yeah, that's a decision based on more than merely "what was the original price of this item?" which is, I suppose, the whole point of irrational economic choices. I don't care about the hypothetical travel mug in the study mentioned above, so I wouldn't pay to have it posted off to me or need to be paid extra to sell it (I'd happily take extra if anyone offered, but if they offer me $25 for a mug that costs $25, okay fine).
It would be different if it was something I liked and valued and wanted a replacement for that was the same, not a new thing.
On the other hand, I bought a fairly cheap cast-iron trivet for my teapot, and I am very happy with it, and I get pleasure out of it every time I look at it. So if I lost it or needed a replacement, I'd probably be willing to pay that bit extra.
If you knew ahead of time exactly what joy you would get from the soup bowl, would you have been willing to pay $50 for it the first time?
If so, this might be a real-world example of your demand curve being identical at two points in time, with a willingness to pay which exceeded the market prices at both t1 and t2.
I participated in similar studies in undergrad and in my experience they really do offer the bet. Typically the format was something like "you start with 1000 tokens to play with, at the end we'll give you $1/100 tokens" and then they actually have you play games with the tokens. Dunno how that specific study worked, but actually offering the bet is common, and in my experience it was pretty obvious that they would actually pay what they promised.
In favor of recovering Andrew Vlahos's point, (a) Once learned, heuristics carry over even when inappropriate. If people are behaving in a way that's rational in the real world, but irrational in a lab setting where the odds happen to be trustworthy, should we really emphasize their irrationality? (b) How do subjects know they can trust the experimenters? Psychologists routinely lie to subjects. It so happens that on the other hand, experimental economists have a strict code of *not* lying, but can the subjects always make such a fine distinction?
"In reality people might think "this guy is trying to scam me" when asked if they would take a bet, but that explanation doesn't hold for scientific studies."
am nitpicking here a bit but given that majority of these studies seem to be overtly telling participants they are studying effect X while secretly they are trying to detect an undisclosed effect Y, not too sure i'd be all that trusting of what study organizers tell me. I say nitpick because don't think majority of study participants would be sufficiently aware to think like that.
Then you run the risk that the person themselves is carrying an unfair coin. The best way is just let them flip the coin as much as they want to convince themselves it's fair, and then decide whether to take the bet.
Random test subjects often carry coins, but seldom carry weighted coins.
(And if I remember right, it's actually physically almost impossible to make weighted coins.)
In any case, there are even techniques for getting unweighted random bits out of weighted coin tosses, as long as they are independent (and identically distributed). But your average person might not understand them well enough to trust them.
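(For anyone curious, the standard technique here is the von Neumann extractor: toss the biased coin twice, call HT "heads" and TH "tails", and discard HH and TT. Since P(HT) = P(TH) = p(1-p) for any fixed bias, the kept outcomes are fair. A minimal Python sketch, with a made-up 70/30 coin purely for illustration:)

```python
import random

def biased_flip(p_heads=0.7):
    """One toss of a coin that lands heads with probability p_heads (illustrative bias)."""
    return random.random() < p_heads

def fair_bit():
    """Von Neumann extractor: toss the biased coin twice, keep only unequal pairs.
    HT and TH are equally likely for any fixed bias, so the kept bit is fair."""
    while True:
        first, second = biased_flip(), biased_flip()
        if first != second:
            return first  # True = "heads", False = "tails", each with probability 1/2

if __name__ == "__main__":
    trials = 100_000
    heads = sum(fair_bit() for _ in range(trials))
    print(f"fraction of heads extracted from a 70/30 coin: {heads / trials:.3f}")  # ~0.500
```

The cost is that roughly half the toss pairs (the HH and TT ones) get thrown away, and the whole thing only works if the tosses really are independent, which is the harder thing to convince a suspicious subject of.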
This was my thinking as well. Virtually no one intentionally carries a coin that's biased. If a stranger approaches you on the street the likelihood you will be able to take advantage with a weighted coin is very low.
If someone offers you a seemingly advantageous bet, it's likely that there's something they're not telling you or lying about which makes it not advantageous, otherwise they wouldn't offer it to you.
Trying to demonstrate to the subject that some *specific* fact is not being concealed from them won't help. It's not as if having a biased coin is the only possible thing you might conceal.
People trust all the time, but it's when it makes sense to them. If the house offered $40 when you win and charged $60 when you lose, the person is very likely to trust the offer is legitimate as written. Of course someone is going to offer a path that gets them more money. When you offer to lose money to someone, they become suspicious, because that's directly counter to what they would expect. Offering someone a $50 service for $50 is also very trustworthy because it makes sense and is expected. There are lots of ways to build trust, if we can understand the motives of the person we are talking to.
"One of these days in your travels, a guy is going to show you a brand-new deck of cards on which the seal is not yet broken. Then this guy is going to offer to bet you that he can make the jack of spades jump out of this brand-new deck of cards and squirt cider in your ear. But, son, you do not accept this bet, because as sure as you stand there, you’re going to wind up with an ear full of cider."
Yeah, but in Laurence's scenario (the one I replied to) the stranger isn't flipping it. I think the "stranger-approaches-you-on-the-street" scenario is too stereotypically ominous to be useful in a study trying to eliminate trustworthiness as a factor.
if a stranger approached me on the street with that kind of proposition i'd be worried it's a distraction and am about to get pick pocketed / mugged, or at very least it's just a hook to get me started down path of some other scam.
Well, and there's inertia. You have some plans for the day going forward, and they rely on your having a certain amount of money available. If you suddenly have an extra $60, well that's nice, but it probably won't change your plans super much. But it's easily possible for plans to be derailed and require recalibrating (if only a little bit) if you *lose* $40 on which you were counting.
Id est, the brain-dead aspect of some of these experiments feels to me like the failure to appreciate the fact that the subject of the experiment has a history --the moment of his interaction with the experiment feels, to the *experimenter*, like it's just a moment cut out of time, standing by itself. But to the *subject*, it's a moment connected smoothly with all the other moments of his life, and the flow, from moment to moment, has a substantial inertia, because planning takes work, and recalibrating after a disturbance takes work. This would be a very rational -- and utility maximizing -- root of a Status Quo effect (or for that matter of loss aversion itself).
No behavioral economics experiments actually result in the participant leaving $40 poorer or $60 richer, though. For one, ethics committees would never approve a study that takes money from people, no matter how fair or well-informed the bet. Conversely, if half of the study participants walk out with $60 more, that's going to seriously hurt the scientist's budget. The most you'll see in an experiment like that is a raffle where one or a few of the participants win money according to the odds of two bets they choose between, like 90% chance of $10 and 10% chance of $50, or 60% chance of $0 and 40% chance of $20.
I’ve participated in some behavioral economics studies at my university, and they were giving away $30-$50 to every participant. I’m not sure how the funding model works here, but at some level if it’s government funded then society as a whole isn’t worse off so the government should be happy to give extra funds to studies that work like this.
It's still a bad way to study loss aversion though, because people's feelings about "free" windfall money that they've just been handed might be very different to their feelings about "real" money that they have earned.
I see, as a PhD student I might have a skewed impression and my superiors' research grants are smaller than most, but $30-$50 is not an excessive reward. My earlier point about the studies never losing you money stands, though.
There are rules about never letting them leave with less than they arrived, but you can hand everyone $50 at the door and then after 20 mins of questionnaires offer them a bet where they have to give $40 back if they lose..
This is the point of large random samples. Each participant has a unique history, but as long as they all have different unique histories, it won't prevent you from detecting a real effect.
Sure, but the problem here is you may not be able to collect random samples that include people with no plans going forward...because there are almost no such people. That is, you can't average out the influence of preexisting plans unless (1) there aren't any, or (2) they don't correlate with what you're measuring. Since we're measuring how people respond to the prospect of the gain or loss of *money* and money is pretty much part of nearly all plans, that's pretty tough. Maybe you could do it with something other than money.
Agreed, and I think Scott's classmates didn't have loss aversion, exactly; it was something more like what you're talking about - they were just guessing the mindset of the people who made the rules. They just assumed the test makers had thought this through so that the dumb-sounding strategy wouldn't reward people.
That would be even more relevant for pre-meds, who rapidly become *phenomenal* at "test-taking" skills -- meaning, sussing out what the designer of the test wants you to say, and saying it.
You must also consider the utility of the bet to the individual. There are people for whom $X (be it 40, 400, or even 4000) represents the difference between being able to feed themselves and their children this month and starving to death. If you offered them a prize of $1000 on heads, and a loss of $40 on tails, they would be irrational to take you up on it, because the real choice is between a vacation and death.
Let’s say you have $20 on you and want to buy a $15 sandwich. The deal may significantly harm your ability to buy an extra $4 drink after that. That kind of variance is really bad when you only have a small amount of cash on you. On the other hand, I’d happily take 10^5 of those in my brokerage account.
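To make that concrete: ordinary diminishing marginal utility of total wealth is already enough to flip the answer as wealth grows, with no loss-aversion kink needed. A small sketch assuming log utility and some made-up wealth levels (for log utility the break-even works out to exactly $120, since the bet is worth taking when (w+60)(w-40) > w^2):

```python
import math

def take_bet(wealth, gain=60, loss=40, p=0.5, utility=math.log):
    """True if the expected utility of the 50/50 +$60/-$40 flip beats standing pat."""
    if wealth <= loss:
        return False  # losing the bet would wipe you out; log utility is undefined at <= 0
    expected = p * utility(wealth + gain) + (1 - p) * utility(wealth - loss)
    return expected > utility(wealth)

for w in (50, 100, 119, 121, 1_000, 100_000):
    print(f"wealth ${w:>7,}: take the +$60/-$40 coin flip? {take_bet(w)}")
```

With these assumptions the flip is refused below about $120 of wealth and accepted above it, which matches the intuition in the comment: the same bet that is scary with sandwich money is a no-brainer in a brokerage account.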
But that doesn’t matter __at all__, because the constructed survey environments cause people to give fake survey-environment answers that no one can claim are relevant to a real-world case.
Loss Aversion is definitely multi-dimensional. A 40/60 bet on a 50/50 coin flip doesn’t exactly equal a $10 gain. Rationality is also a social tool. Loss Aversion has a built in loss of time aversion, loss of interest aversion (i.e. I have better things to do), loss of face aversion (”I don’t want to look stupid”), etc.
> I understand why some people would summarize this paper as “loss aversion doesn’t exist”. But it’s very different from “power posing doesn’t exist” or “stereotype threat doesn’t exist”, where it was found that the effect people were trying to study just didn’t happen, and all the studies saying it did were because of p-hacking or publication bias or something. People are very often averse to losses. This paper just argues that this isn’t caused by a specific “loss aversion” force. It’s caused by other forces which are not exactly loss aversion. We could compare it to centrifugal force in physics: real, but not fundamental.
As you note, it's very important to distinguish the effect itself from the theoretical mechanism we think is underlying the effect. It's of course possible (and I think likely) that the long list of "cognitive biases" will be compressed into a smaller set of principles, but that's a distinct claim from saying the effect(s) used to posit those biases in the first place doesn't exist or doesn't replicate.
Re: the "size" of nudge effects, it always comes back to what your baseline is––and even what the null hypothesis is. While I think it's important to be careful about not overstating any effect, I also worry that sometimes effects are dismissed for being too "small". From a theoretical perspective: if a consensus model says the effect shouldn't exist at all, then any effect size is interesting and important and potentially disconfirms that model (upon replication, etc.). And from an applied perspective: yes, a nudge is no substitute for actually designing a good product, and it's entirely possible for companies to overspend on small nudges––but nudges can still be a useful tool in the toolbox.
To your point, I guess I'd just urge those nudges to be grounded in generalizable principles where possible, and perhaps there's a dearth of carefully articulated, quantitative theoretical models in the social sciences.
On the size of nudge effects, I was surprised to read the claim of how small it generally is because I thought automatic opt-in for employer-sponsored 401(k) plans was one of the more well-known nudges and that the impact of that was enormous, both in terms of total volume of dollars saved and increase in percentage of people participating in those plans.
It's been awhile since I've read the Nudge book, but I also remember discussion of a building design with a stairway on the outside with nice views all the way up and a significant increase in number of people taking the stairs over elevator as a result, but I don't remember the details of that one as well. And behavioral change from making fruit more accessible and dessert less so in cafeteria food presentation. It seems like one would need to debunk a lot of material to argue that nudges are hardly effective.
I'd be curious to know how Hreha arrives at the 1.4% figure. In general, the quotes from his piece create the impression of a poorly written marketing brochure. Every sentence has its own paragraph, and big claims are made but not well supported.
Before: if you opt in to the 401(k) then you get an incentive to save for retirement.
After: you are automatically opted in to 401(k). You can opt out and take the money directly as salary.
If economics worked on pure incentives, then the change would not have any effect. In fact, people are lazy; lots of people don't opt in, but also don't opt out.
Ah. I still don’t think that’s a “nudge”, because - let’s say you’re mentally disabled and do not understand what a 401k is - automatic opt in will be good for you because you won’t know enough to either turn it on or off. Obviously that’s not a real example, but “people being game theory” is not true, but that doesn’t prove “cognitive biases” - some might just be dumb. I don’t think “a lot of people don’t either fully understand or fully pay attention to their 401k status, or don’t know how to opt in or out” is a sign of cognitive bias. While it may be an effective policy (I was unable to find any studies either way, oddly), and might be classified as a “nudge”, it’s not really modifying a decision they make - it’s making it for them.
I think it is also important to note that changing the value one way or the other is not costless. In that case we are back to status quo bias. What might be more interesting is how many people change their contributions after some change in the value of the 401k matching or taxes etc
One possible explanation, offered by my daughter when I was discussing this in a blog post years ago, is that switching is costless but deciding whether you should switch isn't. A good rule of thumb in many but not all contexts is that the default will be what most people do — that, after all, minimizes the administrative costs of allowing for people switching. If those other people who did look at the question have mostly decided to opt for the 401(k) it's probably what you should do, so you stay with the default.
If that's it, then it will stop working when and if a government actor who has read _Nudges_ is choosing the default to be not what most people prefer but what he thinks people should do, and the people eventually realize it.
i think it's not loss aversion but just people having 1) high inertia for this sort of stuff (i recently left a job and have been meaning to flip my 401k into an IRA but it's been 3 months and i have yet to get to it - it does take a bit of time probably making some calls and filling some forms) and 2) most people are financially illiterate so may be applying some kind of unconscious chesterton's fence approach here and leave default option on whichever it is because they don't have a strong view that alternative is better.
The 401k nudge probably mostly takes advantage of people's indifference and high transaction costs - it is rational for a person who thinks the authority making a decision is mostly benevolent to go along with the defaults unless that person knows their situation is unusual, has a high need for cognition, or has other pre-existing reason to change things up. Rational ignorance is not irrational if the expected value of learning enough to change the default is less than the cost of learning plus transaction costs.
The thing about the disutility exponent was weird and not clearly related to loss aversion. I think what they are saying is that if you gain x dollars you seem to value it at a utility of a*x^b, and if you lose x dollars you consider it a disutility of c*x^d (for some constants, a,b,c,d). The claim that they make (if I understand correctly) is that d > b.
So is this loss aversion? Well, would you get more utility from gaining x dollars than you would get disutility from losing x dollars? Well it depends on what x is (and on the exact values of a,b,c,d). What you know from d > b is that if x is sufficiently large, the loss disutility is bigger than the gain utility and that if x is sufficiently small, the gain utility is greater than the loss disutility.
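To make the crossover concrete, here's a tiny sketch with made-up constants satisfying d > b (none of these numbers come from the paper). Setting a*x^b = c*x^d gives a crossover at x* = (a/c)^(1/(d-b)); below it the gain utility is larger, above it the loss disutility is larger:

```python
# Purely illustrative parameters: gain utility a*x**b, loss disutility c*x**d, with d > b.
a, b = 1.0, 0.8
c, d = 0.5, 1.0

crossover = (a / c) ** (1 / (d - b))  # where a*x**b == c*x**d; here 2**5 = $32
print(f"crossover at x = ${crossover:.2f}")

for x in (1, 10, 100, 1_000):
    gain_utility = a * x ** b
    loss_disutility = c * x ** d
    larger = "gain utility" if gain_utility > loss_disutility else "loss disutility"
    print(f"x = ${x:>5}: gain {gain_utility:7.2f} vs loss {loss_disutility:7.2f} -> {larger} larger")
```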
I think that loss aversion is generally framed as referring to absolute amounts rather than utility.
So, 'losses have a higher disutility than gains have utility' is an explanation for why people are loss averse, but not a refutation of the phenomenon of loss aversion.
When economists talk about loss aversion, it's usually in the form of 'people would rather not take a 50/50 bet of either winning $101 or losing $100'.
One explanation for why people wouldn't take that bet is because the utility of gaining $101 is less than the disutility of losing $100.
But that's not a proof that loss aversion doesn't exist. Loss aversion is literally just what we call the fact that they won't take the bet, which is true - they won't.
It's just a good explanation for why people are loss averse in certain domains.
Loss aversion is used colloquially in a bunch of different ways, but formally it means being more risk-seeking for losses than for gains. Convex utility represents risk-seeking behavior, whereas concave utility represents risk-averse behavior. In this example, if d>b (assuming d and b are both positive), the loss utility is more convex than the gain utility, therefore less risk-averse/more risk-seeking.
In case anyone wants a clarification on the difference, x^2 is convex, x^(1/2) is concave. 0.5(0)^2+0.5(100)^2 is greater than (50)^2, so with that utility, you'd prefer a 50/50 at $100 to $50 for sure. OTOH, 0.5(0)^(1/2)+0.5(100)^(1/2) is LESS than (50)^(1/2), so with that utility, you'd prefer to get the $50 for sure over the 50/50 shot at $100.
In my field (economics, decision theory more specifically), it’s used in the risk sense. The third cite on the Wikipedia article, e.g., is a Koszegi-Rabin paper looking specifically at risk attitudes. In other fields it may be used in different ways, but it’s hard to make it, one, distinct from diminishing marginal utility, and two, meaningful, from a rigorous economics perspective without incorporating risk. For example, the Wikipedia article goes on to describe, “Loss aversion implies that one who loses $100 will lose more satisfaction than the same person will gain satisfaction from a $100 windfall,” but “satisfaction” isn’t a thing we can measure. Utility is ordinal, and it can only meaningfully be described as cardinal in a risk setting. We could talk about someone being willing to work more hours to avoid a $100 loss than to get a $100 gain, but that’s no different from having declining marginal utility of money. Similarly, the comment above, about how people won’t take a 50/50 win $101/lose $100 bet. That’s just risk aversion; if we call that loss aversion it’s not a new thing.
The only way I can think of to get a meaningful statement about loss aversion without incorporating risk would be something like, e.g., someone is willing to work up to 9 hours to increase their income from $100 to $200, but willing to work 10 hours to avoid decreasing their income from $200 to $100. I’m not aware of a loss aversion experiment based around that idea, however.
I mean it could mean that the derivative of their utility with respect to gaining money is 1 util per dollar, but the derivative of their utility with respect to losing money is 2 utils per dollar. This isn't really the same as simple risk aversion, which basically means that their utility function has a negative second derivative. It says that their utility function has a negative delta function as its second derivative at 0.
You could for example have someone who wouldn't take a bet to have a 50% chance of gaining $101 and a 50% chance of losing $100, but would rather take 50% odds of gaining $100 than 100% odds of gaining $50 and would rather take 50% odds of losing $100 than 100% odds of losing $50.
This person would be risk-loving unless the risk was distributed across the break even point. This might be an example of loss aversion.
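For concreteness, a value function with exactly those properties isn't hard to write down. The sketch below uses made-up parameters (a 1.2 exponent for gains, a 0.8 exponent for losses, and a 7x weight on losses, chosen only to reproduce the pattern, not estimated from anything); it rejects the mixed +$101/-$100 bet while preferring the gamble in both the pure-gain and pure-loss cases:

```python
# Hypothetical value function: convex (risk-seeking) over pure gains, convex over pure losses
# (so risk-seeking there too), but with losses weighted heavily enough to reject the mixed bet.
def value(x):
    if x >= 0:
        return x**1.2            # convex over gains
    return -7 * (-x)**0.8        # losses scaled by 7; convex in x over the loss region

def lottery_value(lottery):      # lottery: list of (outcome, probability) pairs
    return sum(p * value(x) for x, p in lottery)

mixed = [(101, 0.5), (-100, 0.5)]
gain_bet, sure_gain = [(100, 0.5), (0, 0.5)], [(50, 1.0)]
loss_bet, sure_loss = [(-100, 0.5), (0, 0.5)], [(-50, 1.0)]

print("rejects the mixed +$101/-$100 bet:     ", lottery_value(mixed) < 0)
print("prefers 50% shot at $100 to sure $50:  ", lottery_value(gain_bet) > lottery_value(sure_gain))
print("prefers 50% shot at -$100 to sure -$50:", lottery_value(loss_bet) > lottery_value(sure_loss))
```

All three print True with these parameters.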
That could happen and would definitely be weird, but I’m not sure why we’d call it loss aversion. Like, if we tweak this so the person would rather take 100% odds of gaining $50 over 50% odds of gaining $100, then it’s the standard loss aversion risk thing I started with, and it makes sense to me that person is loss averse: they’re cautious with gains, but will go all out to avoid loss. But the example where they’re risk-seeking with gains, risk-seeking with losses, but risk averse with the possibility for gain or loss…I’m tempted to say it’s “uncertainty” aversion? As in, I’m fine with I’m definitely getting zero or a gain, I’m fine with I’m definitely getting zero or a loss, but I could get a loss *or* a gain? That’s too much uncertainty, I can’t deal.
One difference between Floyd and the other 10 unarmed black guys per year is that we have full video of Floyd not being threatening, and a long time period where things could have been resolved in some other manner. For the others, it's easy for us to be skeptical and wonder, were they reaching for their pockets? Did the police know them from other incidents? Were they wearing gang regalia? Etc.
And of course when you see it all on video, it is natural to scale things up in your mind, wondering how many of the other incidents were similar, but didn't get filmed for whatever reason.
Well, I sure hope you cherry-picked those quotes from Hreha, because the collective impression I get from them is that he's an axe-grinding narcissist. I'm interested in this stuff, but the obnoxious tone of the quotes has persuaded me I can think of a better use of the next 10 minutes. What a putz.
It sounded to me like he's a consultant trying to sell himself for millions of dollars to corporations, and this is just a salvo about why you should hire him instead of other consultants who use behavioral economics.
I sort of worked in this field for a while at my old company, and it very much sounds like a discussion focused on the business aspects rather than the science.
The two are very different discussions - anything done in the real world has so many confounds that you can't just uncritically apply a single theory and expect it to predict everything on its own.
For example, you can never see the result of 'a nudge' on sales in the real world - you can only see the result of 'one more nudge' in addition to the hundreds already in place, including all the ones by all of your competitors. This will never give the same numerical results as a single nudge in a controlled lab experiment, but that doesn't mean the effect measured in the lab isn't happening in the real world.
I think George Floyd to Mary is apples to oranges. Mary is a picture of a probably fictional woman, and her problem is that she's struggling with money. George Floyd was a 10-minute video of a real person, and the problem is that he was murdered by someone who would probably have gone free without all the media attention and protests. I'd also add that the George Floyd incident happened at a strange time in the US: it played into an ongoing story about the relationship between the police and black people, and also a ton more people had suddenly found themselves with free time to go to protests and stuff.
Maybe a better comparison would be: if you watch a short video following a day in the life of a single mother, are you more likely to donate than if you watched a lecture about poverty and single-motherhood? I bet you're at least more likely to have an emotional response. But at that point, is it still the identifiable victim effect? It might be comparable to Hreha's claim about loss aversion: it's real when you turn up the volume knob, but not at "nudging" levels.
Tentatively on the identifiable victim: Maybe inspiring help isn't the same thing as building anger?
The Innocence Project, which helps falsely convicted people get out of prison, doesn't get nearly as much attention as BLM, even though the Innocence Project is helping specific individuals.
I think you have an important point here. There wasn't much 'help' involved in the protests and riots over the summer, mostly a lot of virtue signaling admixed with destruction of property and looting. I am not a big fan of the last two myself, but apparently lots of people find them enjoyable enough that we have to make them illegal to prevent them. Some people really like helping too, don't get me wrong, but it is probably easier to tip people over the edge of "You kind of like to smash and steal shit? Well, here's a justification to do so! People might even write books saying you were noble to set things on fire!" than the edge of "Spend some more of your own time and money to help someone else."
I think your intuitive reluctance (and your medical school friends') to guess on your test is probably founded in what Ole Peters has been talking about with ergodicity economics for a while now (individual actors don't experience the average across time).
Very well said -- my feeling pre-Ariely was that behavioral economics was a much more rigorous field than social psychology, and I agree that this continues to seem like the case.
Nudges though! My feeling about nudges *as a policy project* rather than as a research project is that they were an intriguing but failed idea. Retrospectively they are a strange thing for governments to focus on. Can nudges help you maintain a park or build a railway or fight a war or administer a social insurance program or do any of the other "big" things that government does? Clearly not! So why were some of the smartest people in US government 15 years ago so focused on nudges? Maybe because, in the US, the real problems were seen as unfixable and so nudging was the best you could do...and so nudges were overstated to the point where they would not just make incremental benefits but solve real problems.
In this sense, I would argue that even if a 1% gain is a big deal in absolute terms, nudges are a bad use of a scarce resource (political will and executive initiative) that in a sane world would be invested in much higher-ROI projects.
>Can nudges help you maintain a park or build a railway or fight a war or administer a social insurance program or do any of the other "big" things that government does? Clearly not!
I disagree...
Nudges could, in theory, make people litter in your park less (decreasing your maintenance costs), make workers on a railway more attentive to procedure (making building faster and cheaper), remind people to fill out and submit their paperwork for a social insurance program (increasing coverage and efficiency), and get people to sign up for the army (helping you fight a war).
Of course, nudges can't do any of those things ON THEIR OWN, you do need an actual program to do the thing. But nudges can be effective parts of that program, in the right conditions. Again, their power is that they're cheap additions to an existing program, and can be cost-effective.
And as for 'they went crazy for it then', remember that whenever a new technique pops up, there's generally a ton of low-hanging fruit that it can be applied to for large gains, and then after those are all taken care of the marginal gains of applying it again are less and less. So we should expect to see an explosion of applications when the idea first catches on, and then less use for lower returns later.
But where were the early gains from nudges? American government is multiples worse than peers in some areas, especially infrastructure building (look at ARRA for an example from when Sunstein was in the exact perfect position to make this work better federally!) and whatever effect nudges have had seems to have been drowned out utterly at best.
American government being bad at a lot of things doesn't mean everything they do is bad, nudges could be good but not able to compensate for every other problem the US has in every possible area.
Especially when the baseline we're comparing to is some unspecified list of other governments, and we have no idea whether or to what extent they have also used nudges in their own policy.
There's the British 'Nudge Unit' which started off as part of the government and was then spun-off into its own thing, the Behavioural Insight Team, which is part-owned by its own employees, a charity named Nesta, and the government.
The purchaser was the N.I.C.E., the National Institute of Coordinated Experiments. They wanted a site for the building which would worthily house this remarkable organisation. The N.I.C.E. was the first-fruit of that constructive fusion between the state and the laboratory on which so many thoughtful people base their hopes of a better world. It was to be free from almost all the tiresome restraints – “red tape” was the word its supporters used - which have hitherto hampered research in this country. It was also largely free from the restraints of economy.
"Has anyone discovered," asked Feverstone, "what, precisely, the N.I.C.E. is, or what it intends to do?"
"That comes oddly from you, Dick," said Curry. "I thought you were in on it yourself."
"Isn't it a little naive," said Feverstone, "to suppose that being in on a thing involves any distinct knowledge of its official programme?"
"Oh well, if you mean details," said Curry, and then stopped.
"Surely, Feverstone," said Busby, "you're making a great mystery about nothing. I should have thought the objects of the N.I.C.E. were pretty clear. It's the first attempt to take applied science seriously from the national point of view. Think how it is going to mobilise all the talent of the country: and not only scientific talent in the narrower sense. Fifteen departmental directors at fifteen thousand a year each! Its own legal staff! Its own police, I'm told!"
"I agree with James," said Curry. "The N.I.C.E. marks the beginning of a new era - the really scientific era. There are to be forty interlocking committees sitting every day, and they've got a wonderful gadget by which the findings of each committee print themselves off in their own little compartment on the Analytical Notice-Board every half-hour. Then that report slides itself into the right position where it's connected up by little arrows with all the relevant parts of the other reports. It's a marvellous gadget. The different kinds of business come out in different coloured lights. They call it a Pragmatometer."
"And there," said Busby, "you see again what the Institute is already doing for the country. Pragmatometry is going to be a big thing. Hundreds of people are going in for it."
"And what do you think about it, Studdock?" said Feverstone.
"I think," said Mark, "that James touched the important point when he said that it would have its own legal staff and its own police. I don't give a fig for Pragmatometers. The real thing is that this time we're going to get science applied to social problems and backed by the whole force of the state, just as war has been backed by the whole force of the state in the past."
And while all of these claim worthy-sounding aims, I still get a combination of 1984 and C.S. Lewis's N.I.C.E. from the whole operation. "Working towards innovation in social good" sounds very pleasant, but the methodology also sounds very manipulative.
My explanation for Hurricane Floyd is that it was caused by an assumption from the media that Trump was toast after he mishandled coronavirus and they could let it out into full flower.
My perception of my own loss aversion is that it's not so much losses that I'm averse to, but situations where I feel like an idiot. Losing even a small amount of money will bother me, if I lost it by making a silly decision.
I don't enjoy gambling, because I know that I'll get more displeasure from losing $100 gambling ("I'm such an idiot, why did I gamble?") than pleasure from winning $100 by gambling ("whoop de freakin' doo, a hundred bucks, not exactly life-changing money, is it?").
But I know that there are some people for whom this is reversed; the pleasure of winning $100 is more significant than the displeasure of losing $100, and these are the people you'll find filling up casinos. Not all gamblers are idiots who don't understand probability, some of them just have a mildly broken sense of loss aversion.
Maybe it's a question of one's tendency to magical thinking. The gambler probably rationalizes the events exactly the opposite of you: if he loses $100, well, that's not life-changing, and it's probably because of bad luck or [insert random cause here, failure to wear lucky socks, whatever], whereas if he wins $100 he feels like a freaking genius, because he has worked a system the boring old rationals (like you) don't grasp -- he really hit those Hail Mary's he was muttering under his breath, wore his lucky socks *and* underwear, put it all on good ol' lucky 13, et cetera.
Prospect Theory is touted as superior to Expected Utility Theory, and it has more parameters and so can fit data better within any given sample. But Prospect Theory parameter estimates, even within subject, don't generalize across choice situations. Estimate them in one experiment, put the subject in a different experiment, and you will get different estimates. One explanation for this problem is in Stewart, Canic & Mullett (2020).
Agreed. My problem with prospect theory is not that I don't think it exists in some sense. But I don't think it is a very useful tool to replace what it was meant to replace.
I'm out on behavior economics. It's easy to make an experiment that shows how dumb and irrational people are for not trading away their old coffee mug for five dollars when it's "worth" less than that. In the real world simple heuristics (it ain't broke don't fix it / buy a new one that might not be as good) are very effective and seem to add up to something approximating rational behavior.
For example it's well known that building / expanding roads doesn't decrease traffic congestion - more people drive until the level of congestion has reached an equilibrium. Things like that seem better modeled by rational behavior than anything from behavioral economics.
Not quite sure what your point is wrt roads - if usage increases to hold congestion constant, you haven't benefited the set of people who were using the roads before, but you have presumably benefited all the people who are now using them that were not previously, so expanding the road network has had tangible benefits.
My point is drivers and many other real world actors can be modeled by rational assumptions. Behavioral economics seems more oriented towards clever experiments with questionable validity.
The issue often comes from the way that development is shaped by traffic. If you cut transit times by expanding a road, developers tend to build further from the city center. This increases road usage (as people have longer commutes on average), and the roads will return to the same level of congestion. Conversely, keeping roads the same size (or even decreasing their size) tends to encourage development that is denser and that results in a lower average commute distance.
You could argue that this still benefits some people, as less dense development at the periphery tends to be cheaper than denser development in the core of cities. Thus expanding roads is essentially a type of subsidy for housing. However, it is an extremely inefficient subsidy for housing, as expanding roads is extremely expensive, uses land that could otherwise be used for development, and makes cities less walkable and pleasant.
Obviously there is some sort of middle ground. The above argument doesn't imply that all roads should be single lane roads. However, the world is extremely complicated, and often times second order effects (such as changes in development patterns) are more impactful than the intended effects of an intervention (such as decreasing traffic congestion).
One issue with using behavioral economics (and, to a greater extent, the social sciences) to justify interventions is that they often fail to predict the second order effects of interventions. Often, simple heuristics are more robust to these second order effects than sophisticated but highly parameterized theories (such as prospect theory).
In general, I still think that behavioral economics (and even the social sciences) can be beneficial in guiding government policy. However, I think we ought to be conservative in their application, and expect that significant parts of the theories will have to be modified as they transition from lab to practice.
Why should commuters in your example care about distance commuted as opposed to time spent commuting? Surely they wouldn't mind commuting thousands of miles if they could teleport the distance instantly.
I agree that travel time matters much more than travel distance. The point I was making is that if the government makes an intervention to reduce congestion and decrease travel time, but induced demand ends up eroding away all (or at least a substantial fraction of) the gains that were initially realized, then the intervention will be significantly less effective than predicted.
Example: you are a city planner working on a new road. You estimate that the new road will cost $100 million to construct and will save 10 million person-hours of commuting over the next 10 years. This project is thus projected to save time at a rate of $10 per person-hour. This very well may be worth it for taxpayers. However, if things like induced demand result in the new road only saving 1 million person-hours over the next 10 years, then you only save time at a rate of $100 per person-hour. This would seem to be a very poor use of taxpayer money.
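Spelled out in code, with the same made-up figures:

```python
# Cost-effectiveness of the hypothetical road project above, with and without induced demand.
construction_cost = 100_000_000        # dollars
projected_hours_saved = 10_000_000     # person-hours over 10 years, ignoring induced demand
realized_hours_saved = 1_000_000       # person-hours if induced demand erodes most of the gain

print(f"Projected cost per person-hour saved: ${construction_cost / projected_hours_saved:.2f}")
print(f"Realized cost per person-hour saved:  ${construction_cost / realized_hours_saved:.2f}")
```

That's $10 versus $100 per person-hour, which is the difference between a plausibly worthwhile project and a very poor use of taxpayer money.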
Ahh I think I see the problem then, how you measure success. If you are trying to cut commute time for people already living in the city's environs, sure, that isn't so hot. If you are trying to make the city more appealing such that more people want to live and work there, then it was a pretty big success. Or if you were measuring success by "number of people who can get to work in the city within 1 hour" things would look pretty good.
Deciding what goals one wants to achieve is pretty difficult.
I'd actually argue that many road construction projects are suboptimal even when your goal is to make the city more appealing to live and work in. Larger roads make the city more appealing in the near term, but often fail to make the city more appealing in the long term. Quality high-density housing and public transit are more capital-intensive than roads, but they also have much larger potential returns. A city that invests more in housing, parks, public transit, and industry will see slower growth than a city that invests more in roads, but the growth is more likely to be sustainable, and the city is likely to generate more utility in the long term.
Further, there are issues like the Braess Paradox in which building new roads in the wrong location can lead to longer travel times for everyone, even without induced demand.
Rational behavior implies that expanding roads does decrease traffic congestion, just not by as much as it would if people didn't respond by using the road more. Isn't that obvious? The reason more people are using the road is that it is less congested. If it were not, total demand for use of that road would be what it had been before the expansion.
On the quantum scale, particles behave unpredictably. On the large scale, they very closely approximate classical physics, and quantum physics doesn't matter. If you could manipulate behavior at a quantum level and at scale, you could change how classically-sized things behave. But in practice, quantum physics lets us build weird computers and does little else.
We could throw quantum physics out, model everything classically, and retain nearly everything that matters. I don't think this is a good argument for doing so.
Understanding how a thing really works is important. It helps you predict how a system behaves in extreme scenarios, it gives you extra leverage for controlling outcomes, and, insofar as science is the pursuit of knowledge, it's the obvious next step. You may not need to understand relativity to design a car, but that doesn't mean studying relativity is a pointless exercise.
Yes, I agree with all this, but the evolutionary psychology stuff I’ve seen about heuristics seems better at getting to the “foundations” of behavior. (Also, it’s been a while since I looked at microeconomics, but my understanding is that a lot of this is an active area.) Behavioral economics seems to be mainly about poking holes in “rational behavior” with simplistic experiments with little real-world validity. That coffee mug experiment (I’m too lazy to look up the details, but pretty sure there have been experiments like this) tells me close to nothing about the real world. In the real world there is no perfect knowledge about how much something is worth, things are almost never exact substitutes, counterparties are not completely trustworthy, etc. etc. So generalizing from that kind of experiment is likely to lessen our knowledge of the world, if anything.
Kind of like social psychology, I feel like much of that field (or the research the media has picked up on) has actually lessened our understanding of the world. More like astrology than quantum mechanics.
I think I should expand on what I meant, because maybe my example wasn’t phrased very well. What I meant was: imagine a two-lane highway connecting a suburb to a city. In perfect conditions with no traffic, it’s a 15-minute drive. At 9 AM on an average weekday, it’s a 45-minute drive because of rush-hour traffic. Some people who are very time-sensitive change their work schedules to go in later, around 10; other people stay home and work from home. People plan their grocery shopping etc. for later in the day when there’s less traffic.
The road is expanded to three lanes, the planners naively expect traffic to decrease 50% and commute times to decrease because of increased capacity / less traffic. But instead the commute times are still around 45 minutes - more people use the road now because fewer are staying home / changing their schedules to work later, etc.
Probably I should have used a simpler example, like prices - higher prices mean people buy less of something.
Even something like "now there's a new road it's easier and quicker to get to the city, so now it's worthwhile for people to apply for jobs there/new factories or businesses to set up in the city because they can increase their workforce" and now you have more people wanting to travel in to work at 9 in the morning, so the new road gets clogged up the same as the old one.
"Gigerenzer investigates how humans make inferences about their world with limited time and knowledge. He proposes that, in an uncertain world, probability theory is not sufficient; people also use smart heuristics, that is, rules of thumb. He conceptualizes rational decisions in terms of the adaptive toolbox (the repertoire of heuristics an individual or institution has) and the ability to choose a good heuristics for the task at hand. A heuristic is called ecologically rational to the degree that it is adapted to the structure of an environment. "
He's mentioned in "The Undoing Project" by Michael Lewis. Apparently Kahneman and Tversky hated him. No wonder. He provides a simpler but far less dramatic explanation than they do.
His books are well worth a read.
What's described in the article above shows that there are more complex heuristics for loss aversion. It's not that people are illogical, but instead that they are working in a world of constant uncertainty and you know more about what you actually have.
Would you consider doing a piece that highlights some meaningful good science?
I’m sorta at the point where my trust in academics is at an all time low. It might be helpful to signal boost ethical people doing good work.
I remember at one point I had to stop reading Andrew Gelman’s blog because it’s just so demoralizing hearing how my tax dollars are fueling these petty narcissists to propagate lies.
There's good science out there, but it's probably too technical and not people-oriented enough for this blog? For instance, computer science is a mix, but there are quite a lot of genuinely insightful papers out there in sub-fields like computer graphics, AI, systems engineering etc. On the other hand there are also sub-fields that are a bit more questionable like, erm, AI, also advanced theoretical cryptography has subtle issues that a lot of people aren't familiar with.
I think what these better papers have in common is that they're about inanimate things instead of people. The moment people or even worse, large groups of people go under the microscope it seems reliability goes to hell.
Sorry to pile on your disappointment, but yea, most social science research is low value. Without a good way to test the implications of theories in the real world, most social science boils down to "What do we WANT to be true?" That isn't limited to social science, of course, see e.g. climate change, but social sciences are just steeped in the problem. Even those trying to leverage statistics and other more rigorous methods run into the problem of "The Dreaded Third Thing", just having missed accounting for one variable that would have changed the entire outcome. And of course there's the bigger problem that when you want to find that something is true, given enough data, statistical methods and time, you will find a statistical analysis that says it is in fact true.
If that sounds too pessimistic, consider that social sciences don't even seem to be very good at describing what people actually do, much less coming up with ideas of how to change it. My personal favorite in this space is the statistic that 1/4 of women are sexually assaulted in college. Obviously this is false without having a really wide band of what "sexually assaulted" means, which turned out to be basically anything unwanted, like being bumped into exiting the bus. That is bad enough on its own, but it was taken seriously as the stronger definition of sexual assault. Social scientists who behave as though they believe that 25% of women get raped in college, and their parents still pay thousands of dollars to send them there, are not doing a good job understanding the social world around them.
Hence everyone repeating the story about a crazy guy who shot up a pizza parlor, without any nuance for the fact that he didn't shoot any person and wasn't trying to shoot any person. Yes, the guy is crazy, but saying someone went out and "shot up" a place has an entirely different connotation.
I mean literally every single thing you touch was made or enabled by a technology that depends indirectly on hundreds of thousands of pieces of basic research. The chair you sit on was made by a machine whose steel was optimized using mathematical basic research and physical knowledge of steel gained via microscopes and atomic theory and thousands of lab tests of steel composition and strength... etc. You can go down millions of different paths like that.
Any big hard science or technical journal available now, nature, or even arxiv, etc, will have piles of good science. I skim a few papers a day on average just for fun and a lot of them are good science!
I think the world around me has been built by research conducted mostly by industry. Not much of what's around me has been informed by the academic publishing complex. I'm sure there are papers, but those are more than likely tantamount to explaining to birds how they fly.
This view is informed by the scores of academics I've met who, with the exception of material science, never seem to hold their own research area in high regard in terms of applicability.
I don’t think this is true at all. Directly, you’re right. But industry builds heavily on basic research and academic research. The people discovering the mathematics and logic and structure and number theory and calculus were at universities. Physics professors did the basic quantum research, nuclear physics, bio, etc. Same goes for pharmaceuticals and biotech stuff - the industry builds ridiculously heavily on basic research in academia. Materials science obviously, all sorts of chemistry, experimental physics - universities do LOTS of research. Yes, they aren’t scaling up production lines. But the production lines wouldn’t be there without the many, many different studies and groups at universities across the globe doing work.
You might do well to read Matt Ridley's "How Innovation Works". For nearly all of human history, researchers were trying to figure out why what engineers did worked, as opposed to engineers learning from researchers and just scaling things up. There is of course some of both, but more often than not people making things are tinkering around at the edge of knowledge, tying together lots of different ideas, and stumbling on clever things.
No, actually quite the opposite. You are putting a lot of weight on universities, and I am arguing that very little weight goes there. You said "I mean literally every single thing you touch was made or enabled by a technology that depends indirectly on hundreds of thousands of pieces of basic research." That is simply false, and I pointed you to a book that does a nice job of demonstrating that. Then again, perhaps reading isn't your thing.
Yea, this checks out. In the world of finance, academics are totally useless. There's some academic influence in some certifications like the CFA and in MBA programs, but those theories were largely developed as post hoc explanations of what market participants were already doing.
The current instantiation of academia doesn't seem all that valuable, other than as a jobs program for overachievers.
Sounds like bullshit, or at least myopia, to me. Quantum mechanics wasn't worked out to explain the workings of a transistor invented in 1890. Watson and Crick did not noodle out the structure of DNA to explain genetic engineering already being done commercially in the 1940s. Einstein did not work out relativity to help improve the otherwise inexplicable failure of a GPS system to give sufficiently accurate positioning. The math of the quantum double well was not solved in order to figure out how the laser worked. Nobody worked out the theory of X-ray crystallography so they could figure out what the HIV protease looked like to explain how the engineers came up, in some ad hoc way, with these marvelous new protease-inhibitor drugs that turned AIDS from a prompt death sentence into a chronic and somewhat manageable disease in the 1990s.
Oh Carl, you are such a charmer! Let's go through your examples:
1: Transistors and quantum... ok... Quantum wasn't worked on to solve much of anything of practical use that I can tell. What's your point?
2: Genetic engineering is one of mankind's oldest tricks. You might have met a dog, once. Perhaps seen a horse, or corn. We have been shaping animals and plants since time immemorial, literally. Clumsily, yes, but it didn't start because DNA was figured out.
3: Relativity was one of those things pretty far ahead of its time, that is true, positing ideas that would have to wait a long time to be tested.
4: Would they have thought about the double well without the laser?
5: Ok, this one... I don't even know where to start with this sentence. Sure, let's give you this one.
So you have, say, 3 of 5 examples, all in medicine or theoretical physics. Hooray! Medicine isn't a bad one for your case, as much of the basic research does tend to be of the shape "What happens if we stick X into Y?" It also tends to be of the shape "Huh, X doesn't react to Y the same way Z does... why is that?" which isn't so great for the case.
Now let's look at everything that isn't physics or medicine.
I think this is a very rose-tinted view of academia and doesn't jibe with how any of the companies I've worked with view R&D. Nobody is looking at published literature to inform where engineering effort should be spent.
The academic welfare complex looks more like Afghanistan, where the goal is endless expenditure, not replication, applicability, or the truth. The members of this welfare axis include journals, universities, and media, who respond to their own incentives of attracting eyeballs and prestige. Industry doesn't need any of these actors to survive and thrive.
Kind of depends on what the companies for which you've worked are doing. If, for example, they were at all involved in drug development, or medicine in general, or biotech or advanced materials, or pushing the envelope in chip design, or chemical engineering, then this would be a deeply silly attitude and they would not succeed.
For other areas of the economy, sure, the role of basic research is so far removed it makes no sense at all to pay close attention -- you'll hear about what's important 20-50 years after it's done. For still other areas, it would be nuts because the academic sector in question is corrupt or useless.
Perhaps the companies at which you personally have worked are not a representative sample of all industries?
You might find that your example industries are a very small sample of industries as well... you might be taking behaviors from that small chunk and misapplying them elsewhere.
Absolutely, lots of academia is terrible and lots of areas of industry are poorly served by academia. But to imply that means no useful work in {computer science, engineering, physics, chem, biology} and literally thousands of subfields ... I just don’t get it at all. What? Go check out the latest issue of any journal, or look at the history of literally any popular technique.
Like, CRISPR-Cas9. Industry ... benefits from it. Discovered by: “In 2012 Jennifer Doudna and Emmanuelle Charpentier published their finding that CRISPR-Cas9 could be programmed with RNA to edit genomic DNA, now considered one of the most significant discoveries in the history of biology.” Jennifer was at Berkeley and Charpentier was at (I think) Umeå University in Sweden. Jennifer actually took leave from Berkeley to lead research at Genentech two years before she discovered the use of CRISPR at Berkeley, but left after two months.
And I randomly picked CRISPR.
Another random one:
Knoll was born in Wiesbaden and studied in Munich and at the Technical University of Berlin, where he obtained his doctorate in the Institute for High Voltage Technology. In 1927 he became the leader of the electron research group there, where he and his co-worker, Ernst Ruska, invented the *electron microscope* in 1931. In April 1932, Knoll joined Telefunken in Berlin to do developmental work in the field of television design. He was also a private lecturer in Berlin.
The particularly stupid thing here is that there’s clearly massive working together and collaboration and transfer of ideas and people between academia and industry. And it seems to work well. So I genuinely don’t know how one could say one makes discoveries at the expense of the other.
Also all that completely ignores the fact that universities educate most of the people who then go on to *do* industry research and many of the people who go to industry take with them discoveries from universities!
And TOTALLY IGNORING BASIC FUNDAMENTAL DISCOVERIES, the hundreds of millions of papers that are available and published absolutely inform industry. And everyone else. I mean, go to literally the next Scott article, or the one after that: he cites dozens of published papers he clearly sees as worth reading. The blog has probably cited thousands of published papers over the past ten years. As have many other blogs.
Goodness, no. To take only the most salient example of the forest of electronics in your life, *all* of that is reliant on the experiments on the structure of the atom at the turn of the 20th century -- all done in universities, in areas of pure research that appeared at the time to have no conceivable commercial value -- followed by the immense discoveries in theoretical physics in the 1920s and 1930s, which, again, appeared to have absolutely no practical importance at the time. It was only another 10 years after that, in the 40s, when people at Bell Labs got involved and started wondering whether it was possible to use these ideas to build a "valve" a heck of a lot smaller than a vacuum triode...and then the good folks at Fairchild thought, gee, we could put a bunch of these together on the same piece of silicon...
It's certainly a good long way from the basic research through the applied R&D to the commercial product, but just because the originating work is well in the rear-view mirror doesn't mean it wasn't absolutely key to the whole process in the first place.
That doesn't mean *no* basic research is done in industrial settings, but alas the amount that actually is, these days, is a tiny fraction of what there used to be. (BTW research in improved use of neural networks, what Google calls "AI research", is by my definition very applied. They're not inventing entirely new ways of mimicking the human mind over there, just applying the model already known for decades.)
I should add most people in academia deplore the decline of the big industrial research lab -- the heyday of Bell Labs or IBM Yorktown Heights or even Exxon Annandale. These places are not what they used to be, and it's very unfortunate, as there was a lot of useful cross-fertilization, because we all used to go to the same conferences, read the same journals, talk to each other, and the differing fundamental viewpoints brought a lot of life to the discussions.
I should also add that in biology it's still probably more the case that a lot of basic research goes on in commercial settings, because of Big Pharma. But even there...I feel like there's a tendency to outsource that to little start-ups that they can buy (or not) once they have something promising.
So, still, overall, I feel like there is something screwy with our general social environment that makes original research in the commercial setting less viable than it used to be -- this is a loss.
Much of neural network innovation is just making new chips and distributed architectures that can together train at 10^aaaaa floating point operations per second. “ GPT-3 175B model required 3.14E23 FLOPS of computing for training.” Which, lol that’s half Avogadro’s number. And your post is better argued than mine!
I don't think the industry thing is as true as you think. There's a pipeline from academic research to industry. You figure something out in a grant funded academic context, and then you found a startup on the side, or quit to found a startup, or look for an industry partner to license your stuff.
My university has an office set up to make these sort of licensing deals. The creators get a slice, the university gets a slice and the licensing office is funded by a slice. I'm pretty sure this is common. Probably a lot of stuff that looks like it's industry sourced started on a campus somewhere.
Why would you want to boost your trust level? I don't trust a damn thing anyone says just because of the letters before or after his name, or even how famous he is. If it's important to me, I read the paper, and it better convince me on that basis alone, from its internal data and argument, even if I didn't know who wrote it or where it was published. If it's *really* important to me, I better be able to replicate its key results myself. Those are the only bases for trust when you're an empiricist, which is what I am. Trust and $3 gets you a cup of coffee. It has no serious role at all to play in science.
It's worth observing that George Floyd's death was videotaped and VERY extreme. You can imagine a police officer, in the heat of the moment, accidentally shooting a man he believed was a threat, making an honest mistake. You can't imagine kneeling on somebody's neck for several minutes while he begs for his mother to be an honest mistake. Comparing that to some bad experimental charity ad is sort of like comparing The Godfather to a home movie I made in elementary school. One is going to have a bigger impact than the other.
The video of Philando Castile getting shot seemed more egregious to me, but that cop was acquitted. Since the video also showed Floyd insisting he couldn't breathe when nobody was touching him and he was just sitting in the back of the car asking to be let out so he could lie down, it would be easy for a cop to dismiss his subsequent claims that he couldn't breathe as not being due to their actions either.
> it would be easy for a cop to dismiss his subsequent claims that he couldn't breathe as not being due to their actions
The fact that Floyd was already struggling to breathe doesn't exonerate Chauvin in the slightest. That's literally the worst time to place your knee on someone's neck (I'm no doctor, but...). Chauvin knew he was likely killing Floyd.
My 9th grade (I think) math teacher spent a bit of time with us on SAT prep. Back then (the late 80s) the SAT was all multiple choice. Right answers counted for 1 point and wrong answers counted for some fraction of a negative point. If you straight up guessed randomly for each question you'd do worse than simply not answering. But what my math teacher pointed out was that if you could eliminate 1 possible answer and _then_ guess, you'd overall increase your score, at least statistically. I took this to heart because it was mathematically obvious, but I wonder how many other students in my class did the same.
(Note that I might be misremembering the exact details. Maybe you had to eliminate two answers and then guess? It's been a while!)
The SAT actually removed the penalty for wrong answers in 2016.
Before that you got +1 for a right answer and -0.25 for a wrong answer. That means that with 5 options, your expected value of guessing is 1/5 * 1 + 4/5 * -0.25 = 0. As soon as you can eliminate 1 answer, the expected value is 1/4 * 1 - 3/4 * 0.25 = 0.0625, which means guessing among the 4 remaining options has positive expected value.
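Or, for anyone who wants to see how the expected value shifts as you eliminate options (same +1/-0.25 scoring assumed):

```python
# Expected value of a random guess on the old SAT (+1 for right, -0.25 for wrong, 5 choices),
# after eliminating some number of answer options.
def guess_ev(options_left, reward=1.0, penalty=0.25):
    p_right = 1 / options_left
    return p_right * reward - (1 - p_right) * penalty

for eliminated in range(4):
    remaining = 5 - eliminated
    print(f"eliminated {eliminated}, guessing among {remaining}: EV = {guess_ev(remaining):+.4f}")
```

Eliminating even one option takes the expected value from 0 to +0.0625, and it only improves from there.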
Your details are right. Pure guessing was designed to just be nothing.
Other people had to be begged to take your teacher's advice to heart; they were terrified of losing a point.
Just getting rid of the penalty entirely was probably the right move. The only place it'll make a difference is someone who runs out of time and doesn't have time to fill in every remaining test question with a "C".
At a tangent ... my approach to this issue, as a teacher giving non-multiple choice exams, was that a wrong answer got no credit, leaving the question blank or writing "I don't know" got 20%. That was mainly to discourage students from wasting both their time and mine bluffing, trying to sound as if they sort of knew the answer when they knew they didn't.
Forgive my ignorance, but isn't loss aversion just another way of saying that money has marginal utility? If I have $10k in the bank and need it for say a car down payment next month, then losing $10,000 is going to be way more painful than the gain of winning an additional $15k. Isn't that obvious? Haven't economists known about marginal utility since the marginal revolution or am I missing something?
As for the nudging example, I don't know why economists would be surprised to learn that incentives matter. If you reward people to do x, they're more likely to do x. Is that behavioral economics or just economics?
The work on framing of choices, for e.g. how much to tip, is interesting, although having worked in marketing, specifically in the role of trying to optimize websites for profitability, I can say the idea of framing prices is very well known; I don't know if marketers or economists figured it out first. There are many (infinite?) variables on a page, and they can all have some effect on the rate at which people complete some action, and most tech companies with a lot of traffic have a formal A/B testing program to figure out the best possible configuration of elements on a page or in a sequence of pages.
I suppose that economists could point out various instances of "irrationality" in customer behavior, like Scott's tipping example, but sometimes the rational choice is to just follow the usual pattern and go with the flow.
This feels like a case of economists assuming (incorrectly) that people are only optimizing for money, which seems silly. When tipping, for example, part of the equation is "how much time should I spend thinking about this?" but also "what will the driver think of me if I tip x".
There's an additional factor, which is "how much is it culturally appropriate to tip for this kind of service?" A lot of people, I suspect, will take their cue on this last question from the options presented to them by an app, which in some cases can lead to some odd things like tipping the Uber driver more than the Grubhub driver. But I dunno, saying "the way options are presented to people will affect which option they choose" just seems like something that's rather obvious.
"Forgive my ignorance, but isn't loss aversion just another way of saying that money has marginal utility? If I have $10k in the bank and need it for say a car down payment next month, then losing $10,000 is going to be way more painful than the gain of winning an additional $15k. Isn't that obvious?"
And what about losing vs. gaining $5, when you have $10k in the bank? The marginal utility doesn't change much between $9995 and $10,005.
Yes, but the effect is tiny between $9995 and $10,005: ln(10005/9995) = 0.001. Or to use the example Scott mentioned, if you're a millionaire faced with gaining or losing $20, that's ln(1,000,020 / 999,980) = 4e-5. If you still see loss aversion that's stronger than that, and the studies do, the logarithmic utility function can't explain it.
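Quick check of those numbers, assuming log-of-wealth utility:

```python
import math

# Utility swing between gaining and losing a small stake, for a log-of-wealth utility function.
for wealth, stake in [(10_000, 5), (1_000_000, 20)]:
    spread = math.log((wealth + stake) / (wealth - stake))
    print(f"wealth ${wealth:,}, stake ${stake}: ln((W + s)/(W - s)) = {spread:.1e}")
```

That gives about 1e-3 and 4e-5 respectively - far too small to account for the loss aversion the studies report at those stakes.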
You don't get to be a millionaire, on average, by gambling $20 here there and everywhere. Gambling aversion is probably strongly positively correlated with net worth.
Loss aversion is very poorly named, because it doesn't actually refer to a generalized fear of losses. The standard expected utility model accounts for the fact that different people and entities have different levels of aversion to *risk*, for exactly the reason you mention. So the standard theory already easily explains that there will be plenty of people who choose $0 with certainty over a 50/50 chance of -$40/+$60. Even though it has a positive expected *value*, it may not have a positive expected *utility*, depending on how quickly your marginal utility of money declines. That is indeed obvious, which is why the standard theory already incorporates it.
To have loss aversion, you have to be more averse to losses than is explained by risk. So to do those experiments, you have to confront the subjects with different choices that have the same level of risk, but have different mixtures of wins and losses and see if they behave inconsistently.
In any case, like most of these experiments, the experimental setup is pretty artificial and the choices in the real world are rarely irrational. For example, people generally have locked-in levels of expenses in the short run but freedom to do whatever they want with windfall gains, so the consequences of losses and gains aren't symmetrical. There aren't very many situations in life analogous to someone handing you $50 and then offering you a bet with lower losses.
>You didn’t need Kahneman and Tversky to tell you that people sometimes make irrational decisions...
I dunno, maybe I do.
It's always seemed to me that quite a lot of what people describe as irrationality can also be described as cases where heuristics that roughly approximate rationality happen to conflict with more careful reasoning- like how loss aversion of large sums roughly approximates Kelly betting, or how that famous study about people deferring to an apparent group consensus about which of two lines was longer roughly approximates outside view reasoning.
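On the Kelly point specifically, here's a minimal sketch (the bet's payoff and odds are made up) of how a log-wealth maximizer ends up looking loss-averse for large sums: a bet with positive expected dollar value is fine at small stakes but becomes unattractive once it risks a big fraction of the bankroll:

```python
import math

# Expected log-wealth growth of a 50/50 bet that pays 1.1x the stake or loses the stake,
# as a function of the fraction of wealth staked. Expected dollar value is positive throughout,
# but the log criterion turns against the bet once the stake is a large share of wealth.
def expected_log_growth(fraction, win_mult=1.1, p_win=0.5):
    return p_win * math.log(1 + win_mult * fraction) + (1 - p_win) * math.log(1 - fraction)

for fraction in [0.01, 0.05, 0.10, 0.30, 0.60, 0.90]:
    g = expected_log_growth(fraction)
    print(f"stake {fraction:4.0%} of wealth: expected log growth {g:+.4f}")
```

With these numbers the growth rate is positive for small stakes and goes negative somewhere around a tenth of the bankroll, which is roughly the flavor of "fine with small gambles, averse to large ones".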
It also seems to me that the question of when a person should rely on these heuristics may be more complicated than just "use careful reasoning when precision is required and time allows, and rely on heuristics otherwise". When a lot of people rely on the same heuristics, it's pretty easy to see what sort of risks those heuristics entail - but when you act on your explicit reasoning, you're inventing something new, and the risk of relying on that can be harder to judge. Sometimes, you know that a heuristic is particularly risky in a given situation, so ignoring that in favor of a reasoned alternative is the obvious choice. Other times, you can see that relying on a heuristic is very low-risk, so that while relying instead on careful reasoning might lead to a better outcome, doing so involves taking on more risk. An astronaut who follows a checklist is safer than one who invents their own procedures, even though the checklist is only a rough approximation of the ideal way to fly a spaceship.
So if it can sometimes be the correct instrumental choice to rely on a heuristic over reason, does it make sense to single out any negative consequences of that choice and say that they're the result of "irrationality"?
To some extent, of course, that's just a semantic question- but I think our ordinary use of the word can lead to real confusion. Person A might say "I've reasoned carefully about this situation, and my conclusion conflicts with your intuitive judgement, so I think your judgement is irrational." And then Person B might be like "I'm relying on an empirically very reliable rule of thumb, and I believe I have good reason to trust that more than your or my understanding of the particulars, so I think my judgement is not irrational." A and B might believe they have a disagreement, when in fact they're just each using the word "irrational" to describe different things.
So, that's the issue I have with "people sometimes make irrational decisions". I'm not entirely convinced that "irrational" as it's commonly understood is a natural category- it seems to conflate things as dissimilar as mistakes and the negative consequences of useful heuristics, and imply a non-existent common cause. I think we may need a completely different framework that would consign our current use of that word to the same bin as "phlogiston" and the four humors.
You’re right that “irrational” isn’t really a thing, and even “heuristics” might just be a case of “making a mistake a few times, then understanding the situation better afterwards”, which isn’t a sign of global irrationality at all.
Well said. But I dunno if the best approach is inventing a new psycho-econo-babble vocabulary word, maybe just have a lot of footnotes the first few times you use the word "irrational." It's a little easier to learn a new connotation for an existing word than an entirely new word...er...at least it is at my age ha ha.
Regarding George Floyd vs. Mary the Single Mother, the confounding factor could be media saturation. Mary the Single Mother is some anonymous woman from a stock photo. George Floyd is a character backed by seemingly endless media campaigns propagated by organizations with deep pockets and vast amounts of man-hours to spare. It's only natural that Mary would pale in comparison.
It's worth remembering that the George Floyd incident was preceded by a number of other carefully amplified incidents designed to inflame racial tensions.
Two weeks prior it was Ahmaud Arbery, the "jogger" who was shot by some neighbourhood watch types.
One week prior it was a slow news week so it was that argument about that dog in Central Park.
If George Floyd hadn't started the riots, then another cause celebre would have been found the following week.
I noticed the other day that the scale at one of my favorite places ran up to 35%. I have always been a generous tipper; I figure that for very small orders (a single aperitif before dinner at another restaurant, a coffee, etc.) 30% was appropriate, and I don’t tip less than 20% for restaurants unless the service is quite poor.
But 35% is crazy. I have a lot of waiter friends, and I know how much they make. It’s like $25 to $35 an hour, and they don’t pay taxes. Bartenders make even more, most places. I’m not sure what’s going on there, but I will sooner stop dining out than pay 35%.
I would keep my Denver quarter and never exchange it for a Philadelphia quarter, but this is simply because Denver is a vastly higher-quality city than Philadelphia. If I had been given a Philadelphia quarter initially, I would've leapt at the chance to exchange it for a Denver quarter.
Strange, I had exactly the same reaction, even though I’ve never been to Denver, and only been to Philly’s airport on connecting flights.
Upon examining my “reasoning”, I think it’s because I’d just rather live in Denver, if forced to make the choice in some BE experiment, or in a supervillain’s lab with a gun to my head.
> It sure seems people cared a lot when George Floyd (and Trayvon Martin, and Michael Brown, and…) got victimized. There are lots of statistics, like “US police kill about 1000 people a year, and about 10 of those are black, unarmed, and not in the process of doing anything unsympathetic like charging at the cops”.
This is a really weird example statistic to provide right after postulating that Michael Brown "got victimized".
> Somewhere there’s an answer to the George Floyd vs. Mary The Single Mother problem.
My take is basically that George Floyd was picked as the mascot for something that was happening anyway. His causal effect is something near zero - rather, people who were looking for something happened to find him.
My explanation for George v. Mary: the Mary ad is "You should help her." The implied George ad is "you should help all possible future Georges." They don't seem at all commensurate.
The implied George ad is more like "Take revenge on your outgroup for what they've done to a member of your ingroup", which seems even further from the Mary situation.
People love to attack their outgroup and are always looking for an excuse to do so, people hate putting money into an envelope and sending it to charity and are always looking for an excuse not to do so.
"G&R are happy to admit that in many, many cases, people behave in loss-averse ways, including most of the classic examples given by Kahneman and Tversky. They just think that this is because of other cognitive biases, not a specific cognitive bias called “loss aversion”. They especially emphasize Status Quo Bias and the Endowment Effect."
As Scott implied, these seem more like explanations of the mechanism behind loss aversion than a refutation of loss aversion. It's like if someone observed a rainbow, and someone else explained the rainbow as the result of refraction of light by water droplets. The physical explanation is a confirmation of the existence of rainbows, not a refutation!
It's not quite this innocent, because some people use loss aversion as an explanation, as if it had causal power, which the new evidence suggests might be wrong. Prospect theory also leans pretty heavily into a form of loss aversion that isn't just other stuff.
For a metaphor which I think is really on the mark, but couldn't bring myself to put in the main post, see my old essay Against Murderism https://slatestarcodex.com/2017/06/21/against-murderism/ , which makes the same point about whether racism is causally real or epiphenomenal. I think this makes it clear that there's an important difference.
A big chunk of what behavioral economists call "loss aversion" is probably what normal economists call "marginal utility". As you alluded to at one point, the college students who are asked about 50/50 chances to win $60 or lose $40 are highly likely to be thinking something like "If I win $60 I get to eat a nicer meal or two, but if I lose $40 I can't fill up my gas tank this week".
Put more generally, the value of your last $100 is considerably higher than the value of your 10,001st through 10,100th dollars, and loss aversion includes that factor.
Given the result with millionaires and loss aversion down to $20 I'm guessing there's some part of loss aversion that is separate from marginal utility, but it's at least a big chunk of it.
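To make the marginal-utility point concrete, here is a minimal sketch (not from any of the comments above), assuming ordinary log utility; the $60/$40 bet and the two wealth levels are purely illustrative:

```python
import math

def expected_log_utility(wealth, win=60, lose=40, p=0.5):
    """Expected log-utility of taking a 50/50 bet to win `win` or lose `lose`."""
    return p * math.log(wealth + win) + (1 - p) * math.log(wealth - lose)

for wealth in (100, 10_000):
    take = expected_log_utility(wealth)
    decline = math.log(wealth)
    print(wealth, "take" if take > decline else "decline", round(take - decline, 5))

# The $100 student declines the bet (the lost $40 hurts more than the won $60
# helps), while at $10,000 the same bet is accepted - with no loss-aversion
# parameter anywhere in the model, just curvature of the utility function.
```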
Or possibly there's just a high overlap between people who don't like losing or spending money and people who have high net worth.
I definitely think that there's overlap between wealthy people (as distinct from high income!) and a dislike of spending or losing money. I'm curious if studies on billionaires would show much greater risk tolerance though - a millionaire is usually someone who scrimped and saved, a billionaire is always someone who gambled big and won.
This is an issue with Scott's example and description. What you are describing is *risk aversion*, which is generated by a declining marginal utility of money. And that is definitely a part of standard expected utility.
To have loss aversion, you have to be more sensitive to bets of *equivalent risk* that involve different levels of wins and losses.
I was going to say the same thing. Risk aversion is standard Neoclassical economics. Strictly preferring not to take fair bets can be described by maximizing a diminishing marginal utility function over income or consumption.
For loss aversion to be distinguishable as more than a kink in the utility function, the preferences have to change when the reference point (the point that defines what counts as a loss) changes.
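For reference, the reference-point dependence described here is usually written down as the Kahneman-Tversky prospect-theory value function; the functional form and the commonly cited parameter estimates below come from the standard prospect-theory literature, not from anything in this thread:

$$
v(x) \;=\;
\begin{cases}
x^{\alpha} & x \ge 0\\[2pt]
-\lambda\,(-x)^{\beta} & x < 0
\end{cases}
\qquad \alpha \approx \beta \approx 0.88,\quad \lambda \approx 2.25,
$$

where $x$ is measured relative to the reference point and $\lambda > 1$ is the loss-aversion coefficient that puts the kink at zero. Shift the reference point and the kink shifts with it, which is exactly the testable difference from a fixed concave utility-of-wealth function.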
This is definitely some of it, but loss aversion partisans (including the Mrkva study I linked) argue that there's more, and that for example millionaires display loss aversion on bets of $20, which it's hard to argue is a marginal-utility effect.
It would require some very strange assumptions over the curvature of the utility function, indeed. Not strictly evidence rejecting the classical axioms, but very plausible.
But I think the complaint/suggestion is terminology. "Risk aversion" per se is not a departure from the Neoclassical axioms, so it shouldn't be considered part of behavioral economics.
Not "marginal utility" but "declining marginal utility of income." The loss aversion claim is that the effect exists even for very small amounts, which shouldn't change marginal utility of income significantly.
FWIW I am a strong believer that basically all food delivery people (outside of maybe for a party?) do roughly the same job so I make it a point to always tip the same amount without any consideration for the cost of the food I ordered. This amount used to be 3 dollars but I recently increased it to 4 dollars since it's been about ten years since I started ordering delivery food and I figured I should try to keep pace with inflation. With the newer food delivery services though the service range is often very large so I will tip a bit more if the estimated drive time gets above 20 minutes.
By that reasoning, when you go to a store to buy something, the store's profit should be mostly the same for each item (maybe a little more if the more expensive item requires more storage space). Nobody believes this.
A tip is, in theory, a bonus based on good service. The idea of tipping delivery people started on the basis that they would deliver the food more quickly, but as it became standard practice it got baked into their compensation. I think a more analogous example would be paying employees the same wage for doing the same job, which I think is a broadly popular opinion, vehemently so in the case of women. Honestly your analogy makes no sense to me. It is not generally considered that the store is providing me with a 'having a shelf for goods to sit on' service that I am expected to pay them for, and nobody tips the 'service' employees working in stores; for some reason we are perfectly happy assuming that they should just always provide excellent service.
The thrust of my original post is that in restaurants a 20% tip in theory scales because at more expensive restaurants you are getting a higher quality of service. I don't really buy this, but it is at least reasonable. So when I eat a $100 dinner the waiter makes $20; on the other hand, if I get a $10 dinner the waiter is looking at $2. Was the first waiter really working ten times as hard/well? Probably not, but I will bite the bullet on that social norm. The Uber Eats delivery person putting a $100 bag of food in their car and driving it to my house is doing a job that is 100% indistinguishable from the Uber Eats delivery person putting a $10 bag of food in their car and driving it to my house, so I tip them the same.
Mary the single mother vs american single mothers just looks like scope insensitivity to me? Admittedly I don't know how large an effect you usually see from that when comparing options side to side. Or is scope insensitivity also fake?
I don't think it's scope insensitivity because you're donating $X to charity either way, which will presumably go to some single-mother-related cause. I'm not sure anyone expects their donation to actually go to Mary personally. Also, I don't think the ad was making the claim that your $X would actually help all single mothers in America to the same degree it would help Mary personally.
On identifiable victims: sometimes I write about a big problem and it feels stronger (emotionally) to write in detail about a single instance so the reader can make a personal connection, and sometimes it feels stronger to write with big numbers to emphasize the problem’s scope. They’re different tools that do different things, as I’ve learned in every writing class since high school.
It seems insane to me that anybody would summarize a distinction like this as either “there is a constant ‘effect size’ that makes personal connections stronger than numbers” or “the effect doesn’t exist, so they’re both the same”.
Kind of on the same level as asking whether making right turns or left turns got drivers closer to their destination — it may be a fun anecdote if one turns out to be more common, but if you really want to learn something the heterogeneity is the whole point.
I do recall reading some study that used a narrative about endangered geese in Russia (?), which claimed that the amount people were willing to spend was completely independent of the number of geese that would be saved - they just saw "big number" and it didn't matter if it went up or down by a factor of 10. Wish I could find the link...
On the fun anecdote level, I believe transport companies actively plan for only easy turns (right in the USA, left in Commonwealth countries) in their routing, because three turns that you can do at a red light are faster than waiting for one tough turn, so if you measure "closer" by time then there really is a huge difference between right and left turns, especially for unwieldy vehicles.
Jug handles explicitly trade off space (they require more space than a "Conventional" left turn from the center lane) for safety. (Then they compromise the safety aspect with some space-saving designs).
The space lost to a jughandle is always more than the space gained by not having a turn lane; and it's "more valuable" land, in that an additional lane takes a chunk of frontage, but a jughandle takes land from the intersection corners and much further "back" from the road.
Jughandles are (usually) functionally at-grade (sometimes partial, sometimes full) cloverleaf interchanges; and rarely other sorts of highway interchanges.
(I live in NJ, and as an immigrant to NJ from another state, jughandles are one of the things I wish were not NJ-specific)
I think it's interesting that you miss a possible explanation for your medical school friends' behaviour – honesty. The 'don't know' option is clearly there to emphasise that it is better, as a doctor, to admit you don't know something than to guess. They didn't align the scores correctly for that, as it's a secondary goal of the test, but I think many if not most people would understand the point and be reluctant to guess wrongly as it has a certain unethical feeling – it certainly does to me.
The idea that the test was just an optimization problem for point scoring and did not have any ethical ramifications can be a blind spot of behavioural economics and rationalism in general. You try to dismiss this with 'the average medical student would sell their soul for 7.5% higher grades' but I don't think they would.
I read the linked post and loved it. But the comparison with straight multiple choice isn't quite fair, I don't think. The most impeccably honest person isn't adding any scruple of honesty by leaving a multiple choice question blank. That only decreases the expected correctness! But if there's an explicit "I don't know" option...
Come to think of it, maybe the inconsistency is in failing to answer "I don't know" for literally every question! Probabilities are strictly between 0 and 1 after all.
Not that I'm seriously arguing against the point. Of course one should never answer "I don't know" under that scoring system.
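For what it's worth, the arithmetic behind "never answer 'I don't know'" is a one-liner. The scoring rule below is a guess, chosen only because it reproduces the 7.5% figure quoted elsewhere in the thread; the real exam's rule is in the linked post:

```python
# Hypothetical scoring rule, for illustration only:
# correct = 1 point, wrong = 0 points, "I don't know" = 0.25 points.
CORRECT, WRONG, DONT_KNOW = 1.0, 0.0, 0.25

def extra_points_from_guessing(n_unknown, p_correct=0.5):
    """Expected gain from guessing (vs. abstaining) on true/false questions
    where you genuinely have no idea, so p_correct = 0.5."""
    guess = n_unknown * (p_correct * CORRECT + (1 - p_correct) * WRONG)
    abstain = n_unknown * DONT_KNOW
    return guess - abstain

print(extra_points_from_guessing(30))  # 7.5 extra points on a 100-question test
```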
You touch on it in the linked post, when your friend says the test is supposed to measure knowledge, but you still consider his primary motivation loss aversion, with the moral aspect as a rationalisation. If it's loss aversion, both of you see the test as a maximisation game, but his loss aversion causes him to be unable to rate wrong guesses properly, which he explains post facto with a moral argument.
I would suggest that the inclusion 'don't know' creates an ethical or social dimension not present in the other test, and that most people wouldn't see it as just a points-maximisation game. I would happily guess with no 'don't know' option, but not with one – you can call this 'irrational', but it seems analogous for people who consider many situations money-maximisation games and think moral people are acting irrationally, but many would not consider that laudable.
I guess I see this as more thoroughly fleshing out the rationalization. You're making fine points and it prevents us from proving that loss aversion is the reason for the failure to optimize for points. But, well, to quote a key paragraph from Scott's original post:
<blockquote>
I had people tell me there must be some flaw in my math. I had people tell me that math doesn't always map to the real world. I had people tell me that no, I didn't understand, they *really* didn't have any *idea* of the answer to that one question. I had people tell me they were so baffled by the test that they expected to consistently get significantly more than fifty percent of the (true or false!) questions they guessed on wrong. I had people tell me that although yes, on the average they would do better, there was always the possibility that by chance alone they would get all thirty of the questions they guessed on wrong and end up at a huge disadvantage.
</blockquote>
I guess it seems clear enough that it was the fear of losing points that was distorting their reasoning. Especially if Scott is right that the students didn't change their strategy *at all* in light of the proof of the optimality of guessing. If it were scruples at play, they'd be like "ok, as long as I have some inkling, I'll guess; but it feels dishonest to not choose 'I don't know' when I really have no clue".
I was thinking this too. If someone told me directly 'Guessing on questions you don't know the answers to definitely has a better chance of giving you higher marks' then I might be tempted to guess, but otherwise I'd feel a bit slimy even trying to do that calculation (to figure out if it was worth it to guess). And maybe Scott's friends are more scrupulous than I am.
1. A nonlinear utility function for wealth is often perfectly rational. The mathematical solution to choosing the sizes of positive-expectation bets so as to grow your bankroll as quickly as possible in the long run is called the Kelly criterion. It is based on a logarithmic utility function: you choose bets so as to maximize E(log(wealth)). (A minimal sketch follows after this comment.)
2. I think the answer to the George Floyd question is the viral video. If instead of a video it was a text summary in some local Minneapolis newspaper, we would probably not have heard of George Floyd.
3. My AP physics teacher was adamant that there is no such thing as centrifugal force. Instead there is centripetal force acting against inertia.
(I would gladly trade 10% of my physics/chemistry/math ability for 10% of Scott's writing ability)
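Here is the minimal Kelly sketch promised in point 1 above: just the textbook closed form for a binary bet, with made-up 60%-win / even-odds numbers for illustration.

```python
import math

def kelly_fraction(p, b):
    """Kelly-optimal fraction of bankroll to stake on a bet that wins with
    probability p and pays b dollars of profit per dollar staked."""
    return p - (1 - p) / b

def expected_log_growth(f, p, b):
    """E[log(wealth)] growth per bet when staking fraction f of the bankroll."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

p, b = 0.6, 1.0                  # illustrative: 60% chance to win, even-money payout
f_star = kelly_fraction(p, b)    # = 0.2, i.e. bet 20% of the bankroll
print(f_star, expected_log_growth(f_star, p, b))
print(expected_log_growth(0.4, p, b))  # over-betting gives lower long-run growth
```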
The George-vs-Mary question is like asking why some gofundmes are successful and why some aren't, which mostly seems to be a question of "do you have a social network that's willing and able to fight for you and recruit more help (which will in turn recruit more help, and so on), or don't you?" A strong cause can help expand the social graph, but it is neither necessary nor sufficient (even life-or-death gofundmes have been known to fail).
That's a plausible claim, though I think the two Kickstarters I know of which did extremely well (the Reaper figurines and the Coyote and Crow game) had clever incentives in the first case and an unusually attractive product in the second.
The one I was close to that failed (Terra Memorialis, which was for a business to set up 3D online memorial rooms) had a weird product which was hard to describe briefly. Not that hard, but it wasn't like pitching to people who already know a lot about gaming.
Still, you may well be right about typical GoFundMes and Kickstarters.
"Whoever decided on that grocery gift card scheme was nudging, whether or not they have an economics degree - and apparently they were pretty good at it. "
I disagree! They weren't nudging at all! They were applying standard neoclassical economics to pay someone to do something they wouldn't otherwise do.
Thaler and Sunstein describe a nudge this way (emphasis added):
A nudge, as we will use the term, is any aspect of the choice architecture that alters people's behavior in a predictable way *without* forbidding any options or *significantly changing their economic incentives.*
Paying people (with cash or groceries) to get vaccinated is significantly changing their economic incentives.
And in fact, one standard behavioural economic concept (crowding out intrinsic motivation) might suggest that one shouldn't pay for vaccinations, as then you crowd out the intrinsic motivation (we owe other citizens a duty to be vaccinated) with cash/groceries.
This mistake is made all the time - people describe standard economic analysis as behavioural economics, or "nudges".
I'll give you some examples in vaccination that I think *would* qualify as nudges:
- instead of paying 10 000 people $10 each to get vaccinated, run a lottery and pay one person drawn at random $100 000 (same total outlay and the same $10 expected value per person, so the economic incentive is essentially unchanged; the pull comes from people overweighting a tiny chance of a big win)
- Instead of asking people to opt in to a vaccine, ask them to opt out.
- send them a letter telling them how many people on their street have been vaccinated (if it's a lot. If it's not... maybe don't).
Even then I don’t think “nudge” has anything meaningful about it, because “people really like lotteries” and “public shaming” probably aren’t usefully understood as having any sort of mechanism in common.
The problem is a confusion between the meaning of "nudge," which is a small push, and the way it is used in the book _Nudge_, which is the use of choice architecture to affect individual choices. Only the latter involves behavioral economics.
Regarding Galenter and Pliner, what they focus on is the *exponent* of the curves, and they find that the *exponent* of losses is larger than the *exponent* of gains. Scott summarizes this as "loss aversion", but this is not what it means.
What does an exponent tell you? Let us take exponent 1 versus 2 for concreteness. So the curve for gains looks rather like f(x) = const*x, while the curve for losses looks rather like g(x) = const2 * x^2.
Loss aversion would tell you that f(x) < g(x). But this is a completely different question, and the exponents don't tell us this. If you want to know whether f(1$) < g(1$), then the thing that matters are the two constants const and const2. This is exactly the thing that the exponent analysis tried hard to remove from the picture, because the study wanted to know the exponents!
What exponents do tell us (if the results scale to a wide range) is that we have f(x) < g(x) for *very large* values of x, because a quadratic curve grows faster than a linear one, and at some point the constants no longer matter.
Going back to Hreha, he claims
"...the early studies of utility functions have shown that while very large losses are overweighted, smaller losses are often not."
I don't know whether this claim is true or not, but it is absolutely compatible with the exponents found in G&P. In fact, Hreha's claim *requires* that the exponent of losses is larger than the one for gains. (Even more, if the curves were truly of the form const * x and const2 * x^2, then it would imply Hreha's claim, because the function const2 * x^2 is smaller than const * x for very small positive x. But this would require that the fit is accurate for a specific narrow range of x, and that's probably not what the fit was optimized for.)
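To spell out the crossover in the toy example above (same illustrative functional forms, nothing beyond what the comment already assumes): with gains $f(x) = c_1 x$ and losses $g(x) = c_2 x^2$, the two curves cross at $x^{*} = c_1/c_2$, so

$$
g(x) < f(x) \quad \text{for } 0 < x < \tfrac{c_1}{c_2},
\qquad
g(x) > f(x) \quad \text{for } x > \tfrac{c_1}{c_2}.
$$

A larger loss exponent therefore only guarantees that losses loom larger above the crossover point; below it the inequality runs the other way, which is precisely the "smaller losses are often not overweighted" pattern Hreha describes.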
I buy Scott's analysis overall. The above is a subtle point, which probably got lost at some stage of iterated citations, and apparently it was not important for Kahneman and Tversky anyway. But in this detail, Scott's analysis is wrong.
Yes, absolutely. Gains have *de*creasing marginal utility, while losses have *in*creasing marginal utility (as I lose more and more, every additional dollar becomes more painful). So it makes sense that gains have a small exponent (smaller than one), while losses have large exponents (larger than one).
But that is all compatible with Hreha's claim and with the G&P paper. While Scott was arguing that the G&P paper would contradict Hreha's claims, and was confused why Hreha cites it as support.
For someone so determined to expose loss aversion as pseudo-science, Hreha was coming across as kind of anti-science. I couldn't quite put my finger on how until I got to
"In my experience, creative solutions that are tailor-made for the situation at hand *always* perform better than generic solutions based on one study or another."
Which, sure: common-sense N=1 ideas that you can't really test but "come on, we all know it works" may be the right strategy sometimes (something something seeing like a state). But it's not exactly the moral high ground to demand extreme rigor of others, especially when at least they are trying.
Great review; thanks. My experience (and decent evidence in the literature) suggests that specific BE strategies can be very effective when there is a gap between intention and behavior. For example, people may *intend* to save for retirement but never get around to doing it. In such situations, switching from an opt-in to an opt-out approach has been proven to activate latent demand for such savings. Active choice -- stopping people in the middle of a process and requiring them to state their preference -- can also be effective. (My team used this in a healthcare setting to substantial effect, helping increase the number of patients receiving medications for chronic conditions via home delivery.)
One challenge with these two strategies is that there is no free lunch; unlocking latent demand requires a lot of backend rework to make things easy and automatic for the consumer. In addition, they are counterproductive if there is no latent demand; you're just creating friction for your customers.
But all of this is to say that some elements of BE / choice architecture are alive and well, and their effectiveness is not easily explained by classical economic theory.
“Behavioral economics” as a set of mysteries that need to be explained is as real as it ever was. You didn’t need Kahneman and Tversky to tell you that people sometimes make irrational decisions, and you don’t need me to tell you that people making irrational decisions hasn’t “failed to replicate”.
To be even more precise, the claim is that people are irrational in somewhat predictable ways. If they were just irrational, you'd expect as many behaviours/responses as there are people/possibilities, but in many experiments/problems that's not what we see. People make suboptimal/irrational decisions in a way we can predict/exploit...
To whatever extent these folk stock market trends are real, which most aren’t IMO (lol technical analysis), aren’t they driven by transient characteristics of the underlying market or self-perpetuating ideas driving buying patterns, and not “cognitive biases” that are at all consistent or predictable? Which is what I think is true of all of this - “irrational” decisions are just decisions, people are dumb sometimes, but they’ll correct over time sometimes, and their mistakes are closely and arbitrarily driven by the problem’s details as opposed to general rules (even optical illusions are overcome instantly and don’t actually trick people in significant ways for more than a minute!!)
I happen to agree on technical analysis, though simple stuff (support, resistance, Fibonacci retracement) is often "correct" (self-fulfilling). But if you want to see a more complex example of bias - trend following is an investment strategy (or family of strategies) that works reasonably well over time and shouldn't be possible at all if the rational random walk were an accurate description of markets.
You have tons of these behavioural issues in finance. Loss aversion is pretty real in that realm. Herding mentality (i.e. chasing something b/c everyone is doing it) is also common, though I kind of see that as "rational" given career risks.
I really don't think these issues are idiosyncratic or corrected over time. They're systematic, that's why you have successful CTAs.
They are idiosyncratic and corrected over time over a multi decade time scale and when they’re idiosyncratic to the entire US stock market, but not generally true for all actions taken by all people. But for a different type of market, for different people using it, different trading software, different underlying assets, there’d probably be different issues. The claim is that there’s some “systemic bias” that makes people do something, which is wrong - and people will stop doing the “biased thing” when that situation changes - people will stop tipping when not tipping isn’t seen as shameful or when too many companies use the tipping increase strategy, and that “bias” is more of a culturally and tactically local thing than it is anything about human nature. People trend follow because it works - but why does it work? Sure the market isn’t a random rational walk or w/e, but that’s not a cognitive bias, it’s a property of the way the markets and companies work and reveal information and grow.
Is it possible that, as people become aware of behavioral economic findings, they adjust their behavior and subsequent studies have a harder time replicating the originals?
I feel like a bit of a broken record always talking about COVID on this blog (I have other interests, I swear!), but this part seems disagreeable:
> Nobody believes studies anymore, which is fair .... There are about 90 million eligible Americans who haven’t gotten their COVID vaccine, and although some of them are hard-core conspiracy theorists, others are just lazy or nervous or feel safe already.
"lazy or nervous or feel safe already" is not the least charitable way to describe them but it seems pretty close. How about this rephrasing:
"There are about 90 million eligible Americans who haven't gotten their COVID vaccine, and although some of them believe that scientists are frequently and deliberately collaborating with each other to generate fake consensus about untrue claims, others just don't believe the studies that support the vaccines are reliable due to the normal array of scientific errors and biases."
Really, the bloody minded focus on "nudging" people into taking vaccines (more like shoving) is one of the best ways to create resistance to a proposal. The moment a government starts "nudging" people, you're basically taking a position that you're more rational than them and that no argument, regardless of how well phrased or debated, can possibly get the masses to do what is best for them, because they are stupid. But people don't like it when government officials imply the citizens are stupid and the officials are enlightened, partly because both world history and the present day have cases where that idea got taken too far, leading to lots of people ending up dead or in camps.
Governments should just be focusing on providing as much high quality, trustworthy data about vaccines as possible and then just leaving it there for people to study, poke at, and pick up on their own initiative (or not). Instead a few of them are openly talking about building various kinds of inside-out open air prisons in which ordinary doors and walls act as the fences, or even putting the unvaccinated under perma-lockdowns (i.e. house arrest). This simply says, "we can't win the arguments on their merits so we have to force you to comply", which in turn makes whatever they want come across as much more dangerous.
Nudging is bad, but the studies seem pretty good and also if the vaccines had problems wouldn’t we have noticed given 100Ms have taken them? Yeah people should rationally argue the issue, but that doesn’t seem to work too well for the holdouts (nudging doesn’t either though so idk)
Noticed how? We only really have three ways to notice things like this:
1. The media.
2. Government statistics.
3. Personal anecdotes.
None are especially reliable at the moment because too many people have bought into the belief that everyone has to take the vaccine no matter what, it's been moralized, become a matter of ideology, that undermining the rollout makes you a bad person etc.
Consider: the media is by and large refusing to report on vaccine deaths, e.g. they'll report a death of someone slightly famous but fail to mention that it was a heart attack in a healthy guy two weeks or less after they took the vaccine. I've seen several cases of this now. They would rather omit highly interesting detail that makes the story much more relevant, than do anything that might reduce uptake.
Government statistics: there's really only one dataset gathered on this, the databases governments run that collect reports. For example you can see a better rendering of the US one at openvaers.org. Those all show massive numbers of side effects and deaths, far more than any other vaccine programme in history, and are certainly under-counting by a lot because people are being told to expect nasty side effects. For instance nobody I know has bothered to report when the vaccine makes them feel really sick for a day or two, because that's a "known" side effect. I've even heard of doctors telling patients to expect Bell's Palsy! And when people do seem to be injured by the vaccine it's apparently quite common for doctors to tell them that it's all just a coincidence.
Personal anecdotes: you can't share them, it all gets erased from the internet as fast as possible, at least from US sites. Still, people screenshot them and there's a collection piling up here https://t.me/s/covidvaccineinjuries/ - I have no idea how credible those are, but there it is.
That leaves the studies. That's "Nobody believes studies anymore". You can say a medical RCT is a totally different level of study to most of the ones that get revealed to be unreliable, and indeed it is. But there are nonetheless doctors who assert that drug trials don't seem to reliably detect side effects. Sebastian Rushworth has written about this here:
The summary is that the effect seems to be real but nobody knows exactly why trials don't seem to correlate that well with doctor's real world experience. One is probably exclusions. COVID trials excluded lots of people who are now being given it e.g. the very elderly, pregnant women. HIV+ patients were included and then excluded at the end in one trial without explanation, I think it was Pfizer. In some trials (of statins) it's alleged that the trials use a "run-in" period which excludes people who would have bad side effects and other things you'd think wouldn't be allowed.
So, I guess it's easy to assume that the system is working, is rational, scientifically based and is tuned to look for problems in the same way it would be in normal times. Problem is, I see no evidence of this. Instead what we see is mass hysteria on an a-historic scale in which vaccination has been defined as a quasi-religious issue. Look at this very blog post! It states that people not trusting scientific claims is fair, and then paints the unvaccinated as lazy scaredy cat conspiracy theorists. This is a topic on which even rationalists find it hard to be rational.
Ignoring “the media” ... “government statistics”, i.e. medical monitoring, are actually pretty good in the US - we regularly catch one-in-a-million drug side effects and add warnings to drugs based on that. And we *did* catch a few-in-a-million heart abnormality side effect for some vaccines! But didn’t find any larger ones. It’s very good - the FDA Scott calls overcautious is very cautious, in many areas! VAERS is another thing - self-reported cases for those with the knowledge to look at it - and self-reported side effects are unreliable and random, and might be inflated by a sudden interest in vaccine side effects among a lot of people combined with them spamming the site all over social media.
Yes, the vaccine making you feel sick for a day or two is an intended side effect that all vaccines have. Feeling sick is your body responding to the foreign vaccine proteins and cell death, which is intentional, because it’s how the immune response can work and prepare for the actual virus! Expecting Bell’s palsy - such side effects are common *across* most vaccines at very, very low frequencies - individuals shouldn’t expect it, but it may happen. But at substantially lower rates in all cohorts than autoimmune effects from the virus itself, lol. So that’s a bit of a misstatement I think. And the influencer who appeared to have Guillain-Barré was, I vaguely recall, found to be fake. I’m not sure what sort of injuries are being reported?
Yeah. The original trials, with 40k people each, won’t find 5-in-a-million side effects. Nobody disagrees. The detection comes when the drug is administered to millions of people, five of whom then have some side effect, at which point postmarketing surveillance regularly finds side effects, like it did for AZ, J&J, and thousands of drugs. And that does seem to work pretty well! Same for exclusions - yes, the trials excluded those groups. But if it’s administered to them anyway, I don’t see why problems wouldn’t be caught.
Because the blog has lots of scientists who read it who read all these papers and think “yeah the vaccine stuff checks out” and “oh boy social psychology does not lmao”
And FWIW I do suspect some vaccines have long-term, subtle, weird negative effects because of the lipids or whatever. Thimerosal probably is bad. The microscopic dose of microplastics and plastic chemical additives and synthesis side products is probably bad too. But that’s all quite small scale. The risk-benefit of vaccines, which help prevent a LOT of disability and death, is rather clearly positive by really most standards. If you’re worried about plastic, any vaccine probably has 1,000 or 1,000,000 times less of it than your food and the air you breathe, and the same goes for synthetic chemicals and side products in any other pill or food you eat. That’s compared to an x% chance of just dying, which is quite bad.
Well, rather than dive into a point-by-point back and forth, let me step back and re-focus on the meta-issue.
What you're saying here is basically, some studies are terrible but these studies are pretty great and that's why all this is fine. My point is that so-called "anti vaxxers" are not actually lazy or whatever, but rather that they're doing Bayesian reasoning with very different weights and categorizations attached to terms like "study". They don't trust scientific institutions or public health bodies anymore, and if you don't trust these bodies then it is rational to reject their advice, given that the only real evidence you have that it's a good idea is their own studies, yet your prior on scientific claims by these people being honest/true is very low. For example, because COVID has been a nudge train from the start, with Dr Fauci telling the NYT at least twice now that he deliberately lied about masks and vaccine herd immunity thresholds in order to nudge or manipulate people's behavior.
That's why trying to explain resistance to COVID vaccines as anything other than an expected outcome of previous institutional actions is a non-starter for me. I suppose you could potentially encompass non-trusting people under "feel safe already", but people who don't trust studies anymore seem by far the biggest contributor, and it deserves to be stated explicitly.
You can just download the Pfizer and Moderna studies and read them and look at the graphs. They are “their own” studies only if “they” is “every scientist at every university”, and Pakistan and Israel and South Africa and Cuba have their own safety and efficacy studies if you entirely distrust the US. There’s nothing Bayesian here - if the Pfizer study had a sample size of 50-500 and was based entirely on a written survey, I would dislike it too. And if social psych studies were instead n=50k, had direct measurements, had directly relevant and usable measurements and experiments, and observed effects with r^2 = .99, p<.0001, posterior Bayesian analysis p>.999999, and were replicated many times, I’d like social psychology a lot! The CDC and WHO are not Pfizer, and additionally many of these people trust the CDC and FDA and Pfizer and healthcare as a whole to treat their injuries, do yearly checkups, give their parents pills, and develop new ones. So I don’t see how some health bodies being retarded means that other ones are totally untrustworthy when you can just look at the studies and see they’re strong. Fauci and the NYT may have lied or whatever, but the evidence on vaccines is directly available from many countries and studies. And I don’t think any of them are using Bayes’ theorem (and nor should they ... just read some of the evidence).
“I don’t trust studies anymore” is ridiculous; you do trust studies - trust them to ensure your tap water doesn’t have too much chlorine, trust them that your antibiotic actually kills bacteria, trust them on the safety ratings of the car you drive, trust them that the toothbrush and toothpaste you use prevent tooth decay and cavities... and again, you can just look at the evidence here. Even if you don’t trust studies, you can observe that half the country is vaccinated and use a non-self-reported source of information about population health, whether anecdotal or internet statistics, and you’ll see stuff.
These people don't trust academics any more than they trust big pharma or big government. They have the more old-fashioned (conservative, little c) approach of trusting personal experience and personal relationships. They don't trust the tap water studies, they trust their experience with having drunk the tap water where they live since they were children. They don't trust the studies on the antibiotic, they trust their doctor, whom they have a personal relationship with, and even him only if they are already sick. They don't trust studies for tooth-brushing, they trust their parents who made them do it from a young age. They don't trust car safety studies at all - many of them are the same people who oppose helmet and seat belt mandates and think airbags will make them lose control of the car during the crash. They aren't scientific people, and make decisions on non-scientific (usually personal relationship-based) grounds.
On vaccines they trust people they trust around them. In conservative places, these are conservative people who in turn trust other conservative people who all don't trust this socialist medicine vaccine. (In liberal places it is the same way, hence hippie anti-vaxers who think the MMR vaccine will poison their kids.)
"or feel safe already, and have good reason to." Such evidence as we have suggests that someone who has had Covid is at least as safe as someone who has been vaccinated. Vaccination in addition may make him safer, but not by much.
Further, for non-elderly people with no special risk factors, the chance of dying if infected is about one in ten thousand, so if they think the chance of getting infected is one in ten, vaccination reduces the risk by about one in a hundred thousand, reducing life expectancy by about twenty seconds. Choosing not to get vaccinated is not, under those circumstances, obviously irrational.
I've just put up a blog post with a more detailed analysis. My one in ten thousand figure is correct for a 25 year old according to one source I found, low for older non-elderly.
As a non-US citizen, I am baffled by the usage of tipping as an example for a "rational economic actor". I get the point you're trying to make about nudging, but tipping is much more of a cultural custom than anything that is done for economic purpose. Customer service still exists in the countries where people don't tip, after all.
To me it seems that US citizens tip for the same reason Russians remove shoes when entering the house, or Swiss shake both men and women's hands when greeting - it just feels weird not to. Case in point, Americans tip abroad too, when there's absolutely no incentive to do that.
What if the barrier between economic purpose and cultural custom is porous and indiscernible? And what if that helps undermine the concept of economic irrationality bias as a whole? How does one explain financial system failures or buying bad bonds and bubbles - bias? More “””””culture”””” - just the different complex characteristics of the financial system that are part of itself and can be understood only as the financial system itself is.
> the same reason Russians remove shoes when entering the house... - it just feels weird not to
That's mostly a consequence of the infamously shitty climate and Russians not being rich enough to always instantly get in a car every time they exit a building. Preferring not to have wet dirt on your floor everywhere isn't an arbitrary whim.
> I knew all this, but it was still really hard to guess. I did it, but I had to fight my natural inclinations. And when I talked about this with friends - smart people, the sort of people who got into medical school! - none of them guessed, and no matter how much I argued with them they refused to start. The average medical student would sell their soul for 7.5% higher grades on standardized tests - but this was a step too far.
A great microcosm of why behavioral econ is bad IMO. I had tests like this, except for math competitions, and everyone guessed. It’s all very local and specific and contingent in ways that behavioral economics’ methods are neither equipped for nor interested in measuring.
> Nobody believes studies anymore, which is fair. I trust in a salvageable core of behavioral economics and “nudgenomics” because I can feel in my bones that they’re true for me and the people around me.
Psychoanalysis, behaviorism, Christianity, faith healing, homeopathy, chiropractic, new age cults, hypnosis, etc. Again, it’s all local and depends on so many different things. It’s not generalizable at all in the way they imply - a different person who had learned different things in childhood (and think learning in the sense of learning math or sociability, not the “u were too nice to child so he is narcissist” type psychoanalysis nonsense popular a hundred years ago), or who was just in a different situation (a poor person who grew up on a farm, taking the survey and really caring about doing what he’s supposed to do to succeed, vs. a rich kid who grew up in school, has learned to daze through tests, and just wants to get the survey over with quickly), would behave differently. It’s totally possible for a population- or culture-local phenomenon to be true, but only true in the sense that people choose to do it because they were taught it’s rude not to tip 20% or something, and if they were told not to they wouldn’t - and that seems very different from the sort of claim I sense from the field.
> Galenter and Pliner
Based on demonst’s thing, it seems like they didn’t find instantaneous loss aversion, but large scale loss aversion, which is much more “diminishing marginal utility” style? dunno
> not rioting at systemic racism
Needing a scapegoat or martyr to riot is *very different* from caring only about a scapegoat or martyr. Millions of people care very deeply about systemic racism and black people in America - correctly or not - and they did before Floyd. The dynamics there are not at all well described by a general “population vs. individual” framing. Individual cases are easier to *prove*, as we saw with the video that made it blow up - literally thousands of other individual cases did not gain traction because they didn’t have good videos or evidence, despite being individuals. And plenty of other population phenomena cause protests!
On the Identifiable Victim Effect: for many things there are lots of examples of individual victims, but only a few go viral.
A good example here is people unable to afford healthcare using crowdfunding sites. Some people are able to raise tens or hundreds of thousands. Many more get a few hundred bucks and that's it. But a lot more donations are made to crowdfunding than to healthcare charities that provide care for people who can't afford it (as distinct from other types of healthcare charity, like research).
Perhaps if you had a thousand different stories about a thousand different single mothers written by a thousand different people, three of them would raise a lot more money than the generic story about single mothers in the abstract, and the other 997 would not show an identifiable victim effect.
There were a bunch of proto-George Floyds, like Tamir Rice and Eric Garner, who went viral but less so. There were lots of local ones who went even less viral than that. Whatever it was about Floyd that got his story to go viral in a way that the others didn't, that's the Individual Victim Effect. Perhaps it should be called the Sympathetic Individual Victim Effect or something?
"you value something you already have more than something you don’t."
Really sounds nitpicky vs "loss aversion" - wouldn't they produce the same behavioral outcome precisely 100% of the time? Can anyone delineate Loss Aversion and Endowment Effects?
I think the Endowment Effect should be unrelated to financial value. Loss Aversion would predict that it hurts more to lose something that is worth $100 than $1. It would predict that you stop caring for losses of value zero. But the Endowment Effect would still let you prefer the sticker in your hand over the potential other sticker, even though the sticker is completely worthless.
So there are situations where the two are not the same. But I agree that the overlap is pretty big, and in most situations it's probably hard to see a difference.
Loss aversion and endowment effect are totally different. The canonical example of the endowment effect is where the researcher offers to sell the control group a mug and they offer an average of $3, but when they give the intervention group a mug for free, they demand an average of $5 to sell the mug back to the researchers.
This can't be expressed in terms of loss aversion, because there is no loss. It's just a question of how much they value the mug. When they were buying it, they were only willing to pay $3, but once it was given to them, they were willing to walk away rather than sell it for $4.
I haven't really dug into the studies, so I don't know that this really demonstrates the endowment effect. Maybe this was just strategic, and they were trying to get as much money as possible. But that's what it is in theory, anyway.
I think I may have an incorrect interpretation of loss aversion - perhaps that's why I'm conflating loss aversion and endowment effect. I always thought of it as "people are more sensitive to costs than they are to benefits" Here's how I see your canonical example:
The person is averse to 'losing' the cup, so they require more money to sell it.
The person is averse to 'losing' their money, so they will give less for a cup that they don't own.
I have no formal education in behavioural science - am I over-interpreting 'loss aversion'?
The endowment effect can be viewed as an irrationally increased value placed on the feeling of owning something. It's like a "but it's miiiine" bias. Not that valuing anything -- even (especially) vague feelings -- is irrational. But it's irrational if it's like an addiction that you don't *want* to want. Imagine you'd pay up to $100 for a snozzwobbit, meaning your value for it -- including the warm fuzzy feeling of owning it -- is $100. That means losing the snozzwobbit -- and the concomitant warm fuzziness -- should drop your utility by that same $100. If not, then something fishy has happened.
(Of course there are a million ways to rationalize such asymmetry. Maybe you didn't know how much you'd like the thing until you experienced it, or you just don't want to deal with transaction costs or risk of getting scammed. But if we control for all those things -- and many studies have tried to do so -- we can call the asymmetry the endowment effect.)
Loss aversion is a bit more general. Technically it means having an asymmetric utility function around an arbitrary reference point, where you think of a decrease from that reference point as a loss. Again, the weirdness of a utility function isn't itself irrational -- you like what you like. But the arbitrariness of the reference point can yield inconsistencies which are irrational. You can reframe gains/losses with a different reference point and people's utility function will totally change.
Consider the Allais paradox. Imagine a choice between (a) a certain million dollars and (b) a probable million dollars, a possible five million dollars, and a tiny chance of zero dollars. People mostly prefer the certainty, which is entirely unobjectionable. Now imagine a choice between (a) a probable zero dollars and possible million dollars vs (b) a slightly more probable zero dollars and slightly less possible five million dollars. Now people feel like they might as well go for the five million. And -- here's the paradox -- you can put numbers on those "probables" and "possibles" such that the choices are inconsistent. Rationally, either five million is sufficiently better than one million to be worth a bigger risk of getting nothing, or it's not. In the first choice you can guarantee yourself a million dollars and you don't want to risk losing that. In the second choice there's no guarantee, only bigger or smaller gains.
Thus is your reasoning distorted.
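To put numbers on it, using the canonical Allais figures (the textbook version, not something stated in the comment above): Choice 1 is (a) $1M for sure vs. (b) $1M with probability 0.89, $5M with probability 0.10, $0 with probability 0.01. Choice 2 is (a') $1M with probability 0.11, else $0, vs. (b') $5M with probability 0.10, else $0. Preferring (a) in the first choice means

$$
u(1\mathrm{M}) > 0.89\,u(1\mathrm{M}) + 0.10\,u(5\mathrm{M}) + 0.01\,u(0)
\;\;\Longleftrightarrow\;\;
0.11\,u(1\mathrm{M}) > 0.10\,u(5\mathrm{M}) + 0.01\,u(0),
$$

while preferring (b') in the second choice asserts exactly the reverse inequality, so no single utility function $u$ can generate both preferences.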
If you're talking about your utility for snozzwobbits then the obvious reference point is the number of snozzwobbits you currently own. If your utility for an additional snozzwobbit is much less than your disutility for giving up one of your snozzwobbits, that's suspicious. Still not inherently irrational; maybe you have just the number of snozzwobbits you need and one more would be superfluous. But if we see that same asymmetry -- how much you'd pay for an additional snozzwobbit vs how much you'd sell one for -- regardless of how many snozzwobbits you own, that's irrational.
So there you have it. The endowment effect is a kind of loss aversion where your arbitrary reference point -- as in your value for snozzwobbits -- is however many you currently own. And the Allais paradox example shows that literal endowment/ownership isn't required for this cognitive bias to appear.
Thank you for this! The difference is definitely very subtle. If I've understood your explanation correctly, the endowment effect predicts everything loss aversion would predict in the cases where it's applicable, however loss aversion actually covers a larger set of issues where people perceive a potential 'loss', which does not necessarily require anything to be owned.
"All subjects were entered into a raffle to win a gift certificate for participating in the study, and they were offered the opportunity to choose to donate some of it to single mothers. Subjects who saw Ad B didn’t donate any more of their gift certificate than those who saw Ad A. This is a good study. I’m mildly surprised by the result, but I don’t see anything wrong with it."
I wonder about this. I see a lot of ads in magazines etc. which do this exact thing - "Here is John, who is living rough on the streets for three years since his abusive stepfather threw him out of the family home at the age of fourteen". Usually there's a small-print disclaimer about "Photo is of actor, not of real homeless person" but you can generally *tell* that this is indeed an actor pretending to be the real person, the same way that radio ads where it's purportedly Mary and Sheila talking about this great new furniture store that just opened, you know it's two actresses and not real customers.
So maybe that has an effect - when you can tell this is "Actor playing a part" it doesn't hit you like "this really is a kid sleeping rough" where you would see them in a news story or documentary.
I think it being in a raffle also had an effect; this wasn't people choosing to make a donation based on General Ad A or Personalised Ad B, this was people being asked to give up part of a prize. I think in that case people are making decisions based on "how much do I think is reasonable to give, out of the prize I won?" rather than "will I give my donation based on how hard my heart-strings were tugged?"
(I hate heart-strings tugging campaigns because I *know* they are trying to emotionally manipulate me, and this annoys me so much I deliberately *won't* donate to such efforts).
I think there might be a general principle here that applies to most of behavioral economics. People, in general, have some level of self-awareness and capacity to learn. They also tend not to enjoy being manipulated. This can lead to what I might term "hardening" against manipulation attempts, where people gain the ability to recognize when they are being manipulated and develop a strong aversive reaction to it.
I think that many advertising tricks motivated by behavioral economics may end up "hardening" the general population to the same tricks that they employ. I wonder to what extent gains realized by behavioral economics represent lasting increases in the efficacy of manipulation, and to what extent will be offset by "hardening" of the general population against these techniques.
Without knowing much about the details of 'behavioral economics', it seems to me the criticism of the detractors is largely that, while some effects can be shown in studies, the field is not nearly as neat and generalizable as concepts such as 'loss aversion' would suggest. A lot of it is either fairly common sense or very complicated effects, and giving it the branding of 'behavioral economics' and reducing it to a couple of basic tenets is a marketing strategy of Kahneman et al. rather than genuine scholarship.
I may have read about it in the book "Nudge" but one example that stood out for me was for new employees and 401K plans. Although it's a great idea to sign up many don't because of a lack of understanding and a menu of confusing "investment choices" which almost nobody wants to learn about and have to pick from. By making it opt-out and offering a choice to "just take the default most recommended investment allocation" the participation goes way up. These are not the precise details but in these kinds of cases, I can see where a "nudge" can make an out-sized impact.
To me, arguing whether loss aversion *or* status quo effects are real is a bit like arguing whether the world is made up of centimeters or inches. They're just different representations of the world, describing large chunks of the same territory through slightly different lenses. Maybe one of them will prove slightly more useful than the other? Anyway, I agree with Scott that this is a far cry from proving loss aversion wrong.
Did you read Gal and Rucker? I thought they did good work doing experiments designed to prompt loss aversion but not status quo effects, or vice versa.
Is this the same Jason Hreha who founded the Walmart behavioral lab - the first Fortune 50 behavioral economics team? Who built and sold a startup, is at Stanford, and is listed alongside Dan Ariely on a behavioral-science website as co-authoring an article?
If so, this is more interesting since it is Hreha repudiating his own considerable economic success.
As I understand it there is no career/professional value in producing a study that says, "I attempted to prove that the conventional wisdom X is false. After 18 months of study, reams or data and careful analysis I've come to the inescapable conclusion that the conventional wisdom is in fact correct."
The idea of risk seems a bit absent here? Is that how it really is in this field?
Investment banking acts to maximize expectation, with low levels of risk aversion, and we have global financial collapse every time regulatory restrictions are relaxed a bit to allow them to make more money.
Black Swan or Skin in the Game are probably good books on this, but the claim Taleb makes is that we are woefully bad at predicting risk, and that what we are actually measuring when we measure risk aversion is the degree to which economics struggles to understand risk and payoff - not a flaw in people in the street, who generally do a better job of keeping their affairs in order than businesses employing lots of economics/financial/probability experts that constantly need government bail-outs.
Wilmott (the quant, with a journal of the same name) argues quite persuasively in various places that even our current idea of "correlation" as defined in probability theory, is worse than useless for understanding risk and payoff.
None of the global financial collapses (since at least the Great Depression) were preceded by relaxation of regulatory restrictions. Most have been preceded by complex regulation, then innovation in the financial market in the context of those regulations, where those innovations carried systemic risks that no one (writ large) could recognize due to the complexity of the regulations and the innovations.
If you have a counter-example, I would very much like to know, and let me know what, specifically, was the relaxation of regulation that you are thinking of.
1980 Depository Institutions Deregulation and Monetary Control Act (repealing some interest ceilings)
What pieces of complex regulation are you referring to? Stuff like the mandates to give more loans or sub-prime mortgages?
"no-one (writ large) could recognize due to the complexity of the regulations and the innovations" -- are you talking about CDOs? I thought people (like Wilmott, who has a journal and wrote textbooks) said CDOs were toxic for quite a long time before the crisis?
But it sounds like we agree that banks did not understand the risks they were undertaking; and that it was not isolated but across the board. And as far as I know, these guys employ THE experts in economics and probability.
I disagree that there are risks you can't recognise due to the complexity. Well, sort of. Fine: you can't identify specific risks within the complexity -- but you do know that with increased complexity there will be increased risk -- so sufficiently complex things you don't touch if you're risk averse. And being risk averse is just sensible.
So my point still stands: think of our best understanding of risk as a black box that says "too risky" or "fine"; and we have evidence that it has a strong habit of saying "fine" when it really should say "too risky". Positive risk-aversion results are where we use this box on average people, and when it says "fine" and the people say "too risky", we say the box is correct because MATHS and the people are wrong because PEOPLE?
I don't think that any of those acts relaxed regulations when considered on net. They allowed some things to occur, made them really complex, and each time made the rules more ambiguous, allowing regulators to make it up as they went. I'm not an expert either, but that's the take I get from people I know in compliance in the financial industry. Each one of those acts is over a hundred pages and spawns thousands of pages of regulations. I don't know if it's true or not, but Milton Friedman was supposed to have said that NAFTA wasn't a free trade treaty, because a free trade treaty would be about a page long.
By writ large, I mean to imply that the "no-one" is used colloquially, not precisely. Yes, there were people who saw it all coming. No one saw covid coming in the same way. Yes, we can point to the rationalists and the preppers who saw it coming, but society as a whole did not see either one coming.
The issue isn't complexity per se. The issue is the type of complexity - it's new, and it's fiat. We deal with old, emergent complexity all the time, both in economic spheres and other spheres. Growing food and getting it onto people's plates is an incredibly complex task, but we've been developing the social know-how to deal with it since forever. A new complexity causes problems because there's no social know-how - suddenly implementing covid protocols caused problems in the food supply in the US.
A new reg can be simple or complex - and simple regs do exist. But most regs in the financial sphere are really complex.
And even that might not be the worst thing in the world. The fiat nature of the rules is the real kicker. Emergent rules may not be perfect (like at all), but they are the product of people trying to get shit done. They are almost always the product of people that are the most invested in getting the shit done, and they are done by the people with the most local knowledge. We may not like the rules that the interested parties would come up with on their own, but at least they would be as well informed as is possible.
What we get with financial regs are: Complex rules, put into place before anyone knows what their full effect will be, based on the design of people with less information than those being regulated.
That leads to people doing exactly what any cynic would expect - malicious compliance, second-order effects that no one knows about, and regulated parties using their superior information to subvert the rules.
We do agree that banks did not understand the risk they were taking (again, writ large; there are banks that refrained from dealing with CDOs, but we know what you mean). In your last paragraph, I think you are saying that risk aversion is more rational (or maybe more beneficial?) than simple expected value calculations in lots of places, even though lots of people say it's not. I would agree with that if that is what you are saying.
I could have sworn that Thinking Fast and Slow talks about loss aversion in terms of a faster-than-linear curve (maybe y=x^2, or something like that). Maybe they didn't say "small losses don't really matter" in English, but if you draw the curve near zero, it's apparent.
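For what it's worth, the value function usually attributed to Kahneman and Tversky is steeper on the loss side via a multiplier rather than a higher exponent, but plotting the commonly cited parameterization near zero still shows small losses looming larger than equal small gains. A minimal sketch (the exponents and loss multiplier below are the standard published estimates, used purely for illustration):

```python
# Sketch of the Tversky-Kahneman (1992) value function with their commonly
# cited estimates (alpha = beta = 0.88, lambda = 2.25). Near zero, the loss
# branch is steeper than the gain branch, so small losses loom larger than
# equal-sized small gains.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha            # mildly concave for gains
    return -lam * (-x) ** beta       # steeper (by lam) for losses

for x in (0.5, 1, 5, 20):
    print(f"gain {x:>4}: {value(x):7.2f}    loss {x:>4}: {value(-x):7.2f}")
```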
"Unfortunately, the findings rebutting their view of loss aversion were carefully omitted from their papers, and other findings that went against their model were misrepresented so that they would instead support their pet theory. In short: any data that didn't fit Prospect Theory was dismissed or distorted.
I don't know what you'd call this behavior... but it's not science."
You know, I'm reading The Structure of Scientific Revolutions, spent over a decade in academia, and spent a decade in an industry turning science into products. This sounds exactly like science to me.
I actually didn't mean it to be. Science is a process, one that works really, really well at generating insight out of the work of flawed humans. It doesn't require angels, just some greedy humans willing to skewer their colleagues and rivals in front of an audience.
There's a cliche that 90% of human behavior involves giving, receiving, or bartering for attention. These three activities seem to me to correspond to production, consumption and exchange in economics. That is, exchange theory alone is probably an insufficient foundation for behavioral economics.
This is a brilliant defense of the field and I'm really grateful for it! Another thing that I believe to be reasonably unscathed by the replication crisis is research on present bias (aka akrasia) and commitment devices. Phew for us!
PS, not a correction to Scott's post per se but maybe a correction to an impression readers will likely have. I don't know if it actually matters but is interesting:
Hreha posted that article a full year ago. It reads as (and is) a perfectly apt reaction to the Ariely affair and I presume that when Hreha noticed it being circulated he just savvily removed the date from the article so people wouldn't be distracted by that or write it off as insufficiently timely. (I actually checked the internet archive and he made no other change besides removing the date.)
"Previous criticisms of loss aversion argue that most experiments are performed on undergrads, who are so poor that even small amounts of money might have unusual emotional meaning. Mrkva collects a sample of thousands of millionaires (!) and demonstrates that they show loss aversion for sums of money as small as $20."
The millionaires are interesting, but I suspect they aren't thinking about the real sums but instead in relative terms. To me, all of these experimental propositions feel less like "Would you like to win a small sum of actual money?" and more like "Come up with a correct heuristic for comfortable gambling on sums significant to you."
Personally, I find it weirdly difficult to isolate a single instance of a favourable-odds gamble from the possibility of a ruinous, if quite unlikely, losing streak under the same odds.
Re: the Endowment effect, I don't think it's necessarily a cognitive bias so much as a premium on information. In a world where people are occasionally swindled by others, the expected return on trading your mug for another that someone else *claims* is identical is, in fact, negative - once you account for a >0% chance that they might be trying to pull a fast one. The researchers might know for a fact that the endowed coffee mug and the one they offer are the same, but that fact isn't available to the study participants, so it is rational for them to ask for a higher price to offset their risk of losing an apparently functional coffee mug.

If the subjects are allowed 10 minutes to study and fidget with mug A, then given 10 minutes to study and fidget with mug B, and *then* asked which one they wanted, and they mostly choose the first one anyway, then that's weird and seems to fit the bill for a bias. Maybe those studies exist; does anyone know? To me it seems that what gets cited as examples of the endowment effect is (like in Kahneman, Knetsch and Thaler's study) cases where researchers take two things that should be at the point of price indifference on the open market, note the subject's preference for the one they know more about, and then wave their hands and say "loss aversion!"
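Here is that expected-value argument with hypothetical numbers (a mug I value at $5 and a 10% subjective chance the offered "identical" mug turns out worthless):

```python
# Hypothetical numbers for the "premium on information" reading of the
# endowment effect: trading a known-good mug for a claimed-identical one.
mug_value = 5.00      # value of the mug I already know works
p_swindle = 0.10      # subjective chance the offered mug is defective or worthless

expected_value_offered = (1 - p_swindle) * mug_value + p_swindle * 0.0
premium_to_break_even = mug_value - expected_value_offered

print(expected_value_offered)   # 4.50
print(premium_to_break_even)    # 0.50 -> demanding ~10% extra is not obviously a bias
```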
This doesn't have much to do with loss aversion or the main point of the post, but when you talked about choosing a tipping amount, it reminded me of a thing I did in high school. (I'm not sure if I came up with this thing myself or heard of it from somewhere else. For all I know, this is part of some famous psych study.) I would write the numbers 1 2 3 4 spaced out horizontally on a piece of paper. Then I would ask a random person (well, as random as I could conveniently manage from my high school) to choose one. On the back of the paper, I had already written: why did you choose 3? Because it quickly became obvious that 3 was the overwhelming favorite choice. In my not-quite-random, n = 100ish experiment, 3 was chosen about 90% of the time, 2 was chosen about 10% of the time, and I don't recall anyone choosing 1 or 4. I never let anyone participate who had witnessed anyone else making a choice, and I didn't let them know the choice distribution before they made their own. I wonder if your choice of a tipping amount (3rd choice out of 4 ascending options) is almost the same thing, whatever that thing is. I never searched for an explanation, but it was a fun way to pass through the boredom of public schooling.
That makes sense to me, as I understand people. It's the same phenomenon that gets people to realize that the lottery is unlikely to come out to the seven numbers 1, 2, 3, 4, 5, 6, 7, while they don't see it if the numbers are 21, 56, 13, 45, 19, 5, 29. Both sets of numbers are exactly as likely as the other, but one feels more random, and we expect random. If you ask people to pick among four numbers "randomly" then they are going to want to avoid the extremes - in this case the 1 and 4. I'm less certain on the difference between the 2 and 3, but intuitively I want to pick the 3 as well. Split the middle and round up?
Given that loss aversion and other behavioral economics theories are largely applied to sales and marketing, it seems that you could aggregate and genericize sales data from e-commerce platforms that compare a loss-aversion framing to another, more neutral framing. From a practical standpoint, we do this kind of testing all the time. So it seems like this A/B testing could be modified to provide alternate data points on this topic.
I understand that this method may not provide a perfect testing environment. I'm just wondering if gathering data from "real world" experiments would provide additional reference points.
Also, one has to wonder if companies like Google and Amazon already have reams of this kind of data/analysis that we don't have access to.
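A minimal sketch of the kind of comparison being suggested - a loss-aversion framing vs. a neutral framing on the same product page - with completely made-up conversion counts (a real platform test would also have to worry about traffic allocation, novelty effects, multiple comparisons, and so on):

```python
# Two-proportion z-test comparing conversion under a loss-aversion framing
# ("Only 3 left - don't miss out!") vs. a neutral framing ("In stock").
# All counts below are invented for illustration.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

print(two_proportion_z(conv_a=540, n_a=10_000,    # loss-aversion framing
                       conv_b=480, n_b=10_000))   # neutral framing
```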
I used to work in retail (entry level starting in high school), and the company I worked for did sales just about every week. Sometime in the middle of the week the specific prices and items might shift around, but generally speaking almost everything was on sale for 40-60% off of the "suggested retail price" every week. A new CEO came in and got rid of the sales, and just started doing Walmart pricing - cheaper all the time. The prices were pretty much identical, and so were the items, but didn't involve coupons, waiting for a good sale, or anything else.
The customers hated it, and sales dropped like a rock. It turns out, they really liked getting a "good deal" on a higher-priced item. They felt like the $100 price tag meant it was a quality item, but the $40 sale price meant they were getting a steal. Just selling them a $40 item meant a low-quality item and no discount at all! I think it works for Walmart because people expect fairly low quality and believe the prices are unusually low. Experience shopping there seems to confirm both are true.
JC Penney tried this same thing and also undid it as it didn’t work. Sadly. But this really is specific to the many details of the modern retail environment, where much participation is already pretty dumb/“irrational” (lol fashion prices) so individual details being irrational is maybe unsurprising.
There's also an issue of traffic generation. If you create a need to come into the store to see what's on sale that day/week, then you will likely sell other non-sale items as well. Constant low prices may not harness this effect.
Could that be because they already had their niche audience, the "deal shoppers", while Wal-Mart had their niche audience, the "want it cheap" crowd, and neither way is particularly superior (or maybe Wal-Mart's is superior based on its track record), but changing mid-stream means you lose the audience you've worked to build?
Yes, that was my thinking as well. The clientele of your business doesn't like when you change your business. Nobody needs to shop at a particular store, so when you change it too much they move on. I was more interested in the specific mechanism, where they loved to see something marked up to an obviously too-high initial cost and then "reduced" to a more normal level. For some reason a particular kind of person is willing to pay slightly higher-than-normal prices for something if they think it's a really steep discount from some much higher number.
The thing with your George Floyd anecdote is that even if the identifiable victim effect weren't real, it doesn't mean people would *never* rally around an identifiable victim - just that they aren't more likely to do so. So it's entirely plausible that, due to other factors or just randomness, people rallied around an identifiable victim in this case.
I think you poisoned the well unintentionally with "statistics don't start riots" because now people are fixating on riots. Rioters were an infinitesimal proportion of people who were moved to care or act in some way by the recording of a single individual's personal experience, but were not moved to care or act in the same way by statistics. Those can't just be explained away by "well, people like to riot."
I think what you may be missing is that the study design of contrasting just statistics with just an example of one single mother isn't comparable to something like a video of police beating up or killing a black person. The latter isn't happening in a vacuum. People being moved to do something after watching the George Floyd video isn't a confirming example of the identifiable victim effect. Those people are still reacting to an aggregate of events; that is, they're reacting to statistics, or at least to their perception of statistics. The reaction is to the belief that this kind of behavior in police is pervasive and widespread. The one video is just a precipitating event. It didn't cause the response on its own. "Straw that broke the camel's back effect" is not a cognitive bias. It's just the way multifactor causation works.
The endowment effect is a nice example of the ambiguity of the claim that behavior is irrational. At first glance, it is irrational to value things you have more than identical things you don't have. But that pattern of values makes sense as a commitment strategy for enforcing property rights. If I am willing to fight hard to defend something I have and other people know that, then with luck people won't try to take things away from me and I won't have to fight for them. If I am willing to fight hard to take things other people have, I am likely to get into a lot of fights. Think of it as analogous to territorial behavior in animals.
Some behavior being irrational doesn't mean it was never rational at any point in the historical development of human cognition. Hormonal responses to food flooding you with a desire to chow down calorie dense foods presumably has a plausible evo-endocrinology explanation in terms of being an advantageous behavior in the savannah of 300000 BC, but it has become pathological in an environment of caloric abundance. Similarly, attachment to personal belongings that caused you to send strong signals to random strangers in the Hobbesian nightmare of proto-human pre-history that you would fight hard to protect them may have been advantageous then, but retaining the behavior today when we have walls and locked doors makes less sense.
That's all a cognitive bias is. It's a decision framework that works perfectly well in one context bleeding into the brain at large and taking over decisions in contexts in which it no longer works.
It was rational then and it arguably is rational now. It's a commitment strategy. Even in a modern society, actually protecting your ownership of things requires efforts by you, in some cases efforts that cost more than the thing is worth. Being committed to do it means it isn't in someone else's interest to try to take your things.
My article is largely on things that fit your pattern, that were rational in the environment we evolved in but no longer are, but this one may well still be rational.
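A toy payoff comparison of that commitment logic, with entirely hypothetical numbers - the point being that a commitment to fight over a $5 item, even at a $10 fighting cost, can still lower your expected losses if it changes how often anyone bothers to challenge you:

```python
# Toy illustration of the commitment-strategy argument. All numbers invented.
item_value = 5.0     # what the thing is worth to me
fight_cost = 10.0    # cost of actually fighting to keep it

p_challenged_if_committed   = 0.05   # known defenders are rarely challenged
p_challenged_if_uncommitted = 0.50   # easy marks are challenged often

expected_loss_committed   = p_challenged_if_committed * fight_cost    # fight when challenged
expected_loss_uncommitted = p_challenged_if_uncommitted * item_value  # just hand it over

print(expected_loss_committed)    # 0.5
print(expected_loss_uncommitted)  # 2.5
```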
I have a hypothesis on the identifiable victim effect question. The identifiable victim effect doesn't exist (at all). Rather, people respond to archetypal stories, and sometimes an archetypal story is easier to construct using identifiable characters. George Floyd is actually some kind of archetypal story about the powerful abusing the weak, told through the modern medium of a smartphone camera, and perhaps the combination of plot, characters and medium (and performance) is effective in telling that archetypal story in a way that statistics about police violence cannot be.
This superficially sounds the same as Hreha's observation that "... creative solutions that are tailor-made for the situation at hand *always* perform better than generic solutions based on one study or another." but I don't think it is. For one thing this places finite and measurable bounds on definable characteristics and suggests a path to producing a fully testable hypothesis that describes a framework for engagement with issues through media. Such a hypothesis, if you were able to even partially validate it, would in turn suggest paths to moving the "creative arts" into the realms of social science.
I imagine that you could describe stories (including interventions through media etc) as n-dimensional positions in vector space and assess the effects they would have on viewers. Actually, now that I think about it, this is all we're doing with recommendations on netflix, youtube etc, except we want to measure actions and sentiments about things that occur off the platform and we want to test the idea that certain elements of presentation relate to the literary structure to produce those different outcomes....
This isn't my field, so feel free to correct my ignorance.
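As a very rough sketch of what the vector-space idea could look like - every feature, number and story below is invented purely to illustrate the shape of the exercise, not any actual finding:

```python
# Hypothetical sketch: stories/interventions as points in a feature space
# (identifiable protagonist, vividness, statistical framing), each with a
# measured response, fit by ordinary least squares.
import numpy as np

# columns: [identifiable_victim, vividness, uses_statistics]
stories = np.array([
    [1.0, 0.9, 0.0],   # single-victim video
    [1.0, 0.4, 0.3],   # profile piece with some numbers
    [0.0, 0.2, 1.0],   # statistics-only report
    [0.0, 0.6, 0.8],   # animated data explainer
])
response = np.array([0.80, 0.45, 0.10, 0.20])   # some measured engagement/action rate

weights, *_ = np.linalg.lstsq(stories, response, rcond=None)
print(weights)   # a crude estimate of how much each story dimension "matters"
```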
I've read Thaler's "Misbehaving", and the story it tells is of a discipline fixated on an axiomatic notion of rational actors (using a particular definition of rationality), to which behavioral economics, with their experimental, interdisciplinary, real-world approach was a necessary correction.
Even remembering all researchers overblow their work's impact and importance, and trying to be careful about accepting anything that confirms my priors, I see no reason to assume the story isn't roughly true. This would make behavioral economics an important step in the progress of scientific paradigms even if all of their specific theories turn out incorrect, simply by pointing in the right direction that would otherwise continue being ignored. (The first question I would ask of its critics to assess whether they're worthy of listening, then, is "what's your alternative?")
There is, of course, a much less charitable interpretation of the above, which is that behavioral economics constitute, to paraphrase Robert Skidelsky, "not any new insight, but technical prowess in making an old insight acceptable to the economics profession". This impression is exacerbated by the fact that practical applications pursued by their practitioners turn out to be some "nudges" on the margins, aiming to exploit the "irrationality" or lead the "irrational", "misbehaving" people towards a more "rational" outcome. Essentially, all its momentum comes from catching up with advances in other fields of social science, and adapting them in the way that left the entire discipline of economics, its goals and underlying assumptions intact. If one thinks economics is in a serious need of paradigm change, well, this clearly ain't it.
Then again, it still means the field of economics can now direct its funding into sound empirical science, which seems to benefit everyone. (I now see many psychologists and other social scientists cite and praise economists' research, which, given psychology's recently exposed thorny relationship with research standards, is not without irony.)
Regarding the medical test story it is not so clear to me whether it is as simple as all that.
Let's say we have N questions, modelled by N not-necessarily-iid discrete random variables X_i taking values in {1, 0, -0.5}, and also modelled by Y_i taking values in {1, -0.5}. It may still be possible that E(\sum X_i) >= E(\sum Y_i).
Here's an attempt at a proof. Let's take just one question. We can represent probability distributions over the three choices as points on a 2-simplex, i.e. (p_1, p_2, p_3) such that \sum_i p_i = 1 and p_i \in [0,1]. Their expectation given the scores {1, 0, -0.5} is just a linear map to R. Similarly, we can represent probability distributions over just the two choices as a 1-simplex face of the above 2-simplex. Now, for example, the uniform distribution over just T and F is the point (1/2, 1/2, 0) at the centre of the base of the 2-simplex, with expectation 0.25. Since 0.25 is a regular value of the map, its inverse image is a 1-dimensional submanifold of the 2-simplex (treated as a manifold with boundary), meaning there is a curve from the point (1/2, 1/2, 0) into the interior of the 2-simplex along which the expectation stays 0.25. Which means I can have a distribution (p_1, p_2, p_3) over all three choices which gives the same expectation.
What is my intuition behind this? Say for a given question I don't know whether the answer is True any more than I know whether it is False; then it makes sense to assign them equal probability. However, if I am slightly less confident of one answer over the other - this seems more realistic - while still being in the region of guesswork (for me), then the probabilities to be assigned will not be uniform. That is, the Bernoulli parameter associated with the question is itself not uniformly distributed. In fact, the Bernoulli parameter for a question need not even follow a discrete distribution, and in that case I have to further figure out how to compress it to one number.
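To make the single-question case concrete, under the scores {right: 1, blank: 0, wrong: -0.5} a blind 50/50 guess has expectation 0.25, there are interior distributions over (right, blank, wrong) with exactly the same expectation, and slightly-informed guesses beat both. A quick check:

```python
# Single-question expectations under the scoring {right: 1, blank: 0, wrong: -0.5}.
def expectation(p_right, p_blank, p_wrong):
    assert abs(p_right + p_blank + p_wrong - 1) < 1e-9
    return p_right * 1.0 + p_blank * 0.0 + p_wrong * (-0.5)

print(expectation(0.5, 0.0, 0.5))     # blind 50/50 guess: 0.25
print(expectation(1/3, 1/2, 1/6))     # interior point of the 2-simplex: also 0.25
print(expectation(0.6, 0.0, 0.4))     # slightly informed guess: 0.40
```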
"But also: there are several giant murals of George Floyd within walking distance of my house. It sure seems people cared a lot when George Floyd (and Trayvon Martin, and Michael Brown, and…) got victimized. There are lots of statistics, like “US police kill about 1000 people a year, and about 10 of those are black, unarmed, and not in the process of doing anything unsympathetic like charging at the cops”. But somehow those statistics don’t start riots, and George Floyd does. You can publish 100 studies showing how “the Identifiable Victim Effect fails to replicate”, but I will still be convinced that George Floyd’s death affected America more than aggregate statistics about police shootings."
I'm surprised this point didn't go in a different direction. I think with the notion of Identifiable Victim Effect there's a lot of context missing. And I suspect this may be true of a number of different biases. I think the George Floyd poster, and all its attendant implications, has to do with *stacking* - look at the parentheses: "(and Trayvon Martin, and Michael Brown, and…)"
The same point can be made re: marketing efforts based on nudging - the cheesiness of a particular marketing campaign is not only a function of what the nudging seeks to achieve but also of the zeitgeist-based (rolls eyes...) context in which it functions. Which I guess is why music, and marketing, needs periodic reinvention.
The broader point here is I would love to see research looking at how biases operate contextually - how many publicly-adjudicated Identifiable Victims does it take for a population group to start exhibiting bounded rationality?
I have two basic questions about loss aversion and wondering if these are beside the point or have been addressed in the research:
1] The experiments you generally read about are with small sums of money, say $100. If a person has a total wealth of say a few hundred thousand or more (you may include any discounted future earnings in there if you like), at the scale of $100 my utility function should be ~linear, so I should be risk neutral (see the sketch after 2] below). So in order to establish loss aversion experimentally, wouldn't you need to be dealing with sums that are actually material to the person?
2] It's all good to present a fictional bet in an experimental setting, but in the real world someone needs to take the other side of the bet. Say everyone's loss averse; one person's loss is another person's gain, so how do you get to an equilibrium?
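On 1], a quick sanity check of the near-linearity claim, assuming log utility purely for illustration (the specific utility function and wealth level are my own assumptions, not anything from the studies):

```python
# Risk premium of a 50/50 +/-$100 gamble for an agent with log utility and
# ~$300k of wealth (both assumptions illustrative). The premium is ~2 cents,
# i.e. at this scale the agent is effectively risk neutral.
from math import log, exp

wealth, stake = 300_000, 100
expected_utility = 0.5 * log(wealth + stake) + 0.5 * log(wealth - stake)
certainty_equivalent = exp(expected_utility)

print(wealth - certainty_equivalent)   # ~0.017 dollars
```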
Behavioural economics is trying to solve all of economics by first solving all of psychology. If you think about it, this really is the logical end goal. The plan is to predict exactly what each individual is thinking and then make economic predictions factoring in each and every possible action (or at least the average) a human could make. This seems so stupid. How is this the cutting edge of economics? Why can't we just give people free healthcare?
This is Steve Sailer 101, sans golf course design references.
(BTW, Steve, I played Dunes Club a few weeks back - it is that good. They change the pins between 9s.)
Behavioral economics and social psychology tried to make iron rules of human behavior. Human behavior is constantly mutable. So iron rules do not exist.
>It sure seems people cared a lot when George Floyd (and Trayvon Martin, and Michael Brown, and…) got victimized. There are lots of statistics, like “US police kill about 1000 people a year, and about 10 of those are black, unarmed, and not in the process of doing anything unsympathetic like charging at the cops”. But somehow those statistics don’t start riots, and George Floyd does.
The ~1000 number is specifically for those fatally *shot* by police, not deaths from all causes. That might seem like a quibble, but between that and the 'police' qualifier the statistic is only capturing one of the three specific cases listed - and not the most salient one, either.
I'm uncertain how much I'd disagree with the point being made even if it was off by an order of magnitude, but it's a notable error when used for rhetorical flourish and IIRC Scott has made it before. Might see if I can dig up the precedent...
Hi Scott, I think you're generalizing too much from your own experience. When I faced multiple-choice tests with negative marking, I always calculated the optimal strategy for guessing beforehand and stuck to it. E.g., in the physics/math GRE there were four choices (or five, I don't exactly remember how many) per question. If one were to guess completely at random, one would get a zero or negative score in expectation. However, if one could eliminate even one choice, the expectation would become positive. So whenever I could eliminate one choice I answered the question. I know many other people who did the same. Similarly for tipping, I calculate the amount of tip (10% or 20%, so it's an easy calculation) and add it separately. Behavioral economics can't account for such behavior.
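For concreteness, here is that arithmetic under a right-plus-one, wrong-minus-a-quarter scoring rule with five choices; the exact penalty on the tests in question may have differed, so treat the numbers as illustrating the strategy rather than any particular exam:

```python
# Expected score per question under negative marking: +1 for a correct
# answer, -0.25 for a wrong one, 0 for a blank (penalty value assumed).
def expected_score(n_remaining_choices):
    p_right = 1 / n_remaining_choices
    return p_right * 1.0 + (1 - p_right) * (-0.25)

print(expected_score(5))   # 0.0    - blind guess among 5: no better than leaving it blank
print(expected_score(4))   # 0.0625 - eliminate one choice and guessing turns positive
```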
Behavioral economics and social psychology are pretty adjacent subjects. Given that much of social psychology's findings have come under grave doubt I would be pretty wary of behavioral economics too.
I think so much of this (the actual argument AND the meta-argument) boils down to the transition
(no opinions) -> heuristics -> ideology.
This is a pattern one sees everywhere.
You want to buy a car. If you're like most people, you just don't care. You had Toyota, it was good. Then a Ford, it was good. Now you can get a good deal on a Kia.
You want to buy a phone. Well you've used Macs and you like them. You had an iPod and you liked it. The heuristic "I like Apple stuff" seems to work for you. You can spend a month researching phones, or you can go with the heuristic.
But somehow (and I think this is a transition that has been GROSSLY UNDER-THEORIZED in social science) a heuristic can become an ideology. My heuristic (hah!) until I see evidence otherwise is that this is essentially a transition from "I like X" to "I hate not X".
The heuristicist is happy with his heuristic and couldn't care less what choices you make.
The ideologue finds it essential to defend every bad choice Apple makes, to attack every good decision Intel makes. This rapidly shades into hanging out with the Apple people to make fun of those stupid Windows people, on to "how can you go out with someone who doesn't just like x86 but who works for AMD???"
This is everywhere!
Someone uses the heuristic "white suburbs" as a way to solve the problem "I want a quiet neighborhood". All they care about is the quiet part. But those who see the entire world as ideological (along this particular dimension) cannot believe that someone just made a quick choice of this type of neighborhood for this type of reason -- clearly they MUST have been motivated by racism. After all, people are divided into Chevy people and Ford people; there's no such thing as a person who just doesn't give a fsck about their brand of car...
We start with the heuristic "wearing a mask is probably worth doing in spite of the hassle".
In some people this transmutes into "I HATE non-mask-wearers", and because no one's willing to admit this, we get tortured excuses about "well, if they don't wear masks it results in a worse experience for the rest of us". Perhaps true, but when the battle shifts to ivermectin and their choice has ZERO influence on your future health, it's still all about hating the other.
This transition from heuristic to ideology seems very easy. Cases based on products are especially valuable for understanding because most of us both have some products for which we have all three relationships: we can't imagine that anyone especially cares about their brand of TV, while caring a lot about a brand of soda, and shunning people who listen to the wrong music.
This is often explained as tribalism, but I'm not sure which comes first. In a lot of cases it seems to me like the heuristic comes first, it transforms into ideology, and then a tribe is discovered. (Maybe that's the loner path, and the tribe-first path is more common? But on the internet, for fan-type things, it definitely seems like the order is often heuristic -> ideology -> tribe.)
So back to the article.
What I see here is an example of this sort of thing. The Behavioral Economics guys are making a bunch of observations (which can be viewed as heuristics -- people will often engage in Sunk Cost Fallacy, people will often engage in Loss Aversion; if you don't have better data that's the way to bet as to their behavior). But in some individuals this gets transformed from a heuristic to an ideology or the opposing ideology.
For example, I don't get A DAMN THING about the anti-nudge people. They seem to be too stupid to understand that EVERYTHING about a form or procedure or default is a choice, so why not design the defaults as best for society -- with "best for society" something we debate and vote on if necessary. But anyway, you have these anti-nudge people around, and they have their ideology; not just a heuristic that "nudge procedures are probably bad" but full-on "anyone who ever has anything nice to say about nudge-related issues is my MORTAL ENEMY". And that seems to be everything about why the article was written by Hreha.
And of course it goes all the way. Scott wrote something about this many years ago:
which I would summarize as "utilitarianism is a good heuristic -- but it's a HEURISTIC". You can either accept that as a heuristic there are cases where it fails, and try to figure out a better understanding of life -- or you can convert utilitarianism into an ideology, and willingly drive over the cliff if that's what your heuristic tells you to do.
Most of our political insanity seems to derive from what I've been saying -- people who can't tell the difference between heuristics and reality (ie when to accept that the quick answers of the heuristic might be invalid/sub-optimal); and people who refuse to accept that sometimes a heuristic is just a heuristic, not a buried ideology.
> > When the two biggest scientists in your field are accused of "systemic misrepresentation", you know you've got a serious problem.
Not necessarily? It just means your field is big enough to have accusers in it.
> There are lots of statistics, like “US police kill about 1000 people a year, and about 10 of those are black, unarmed, and not in the process of doing anything unsympathetic like charging at the cops” ... But somehow those statistics don’t start riots, and George Floyd does.
According to mappingpoliceviolence.org, the US police kill over 30 unarmed black people each year, and over 100 unarmed people of all races.
Anyway, to suggest Identifiable-Victim-effect-is-not-a-thing is obviously silly if you compare X identified victims with X unidentified victims. If the finding is "1 identified victim 'only' feels as bad as X unidentified victims" for X>100, uh, identifying a victim has a huge effect.
> Ad A: A map of the USA. Text describing how millions of single mothers across the country are working hard to support their children, and you should help them.
> Ad B: A picture of a woman. Text describing how this is Mary, a single mother who is working hard to support her children, and you should help her.
But of course, real life charities normally combine both: highlighting one victim, then citing statistics about the large number of victims. Did they not test the usual combination? Odd.
"If some sort of behavioral econ campaign can convince 1.5% of those 90 million Americans to get their vaccines, that’s 1.4 million more vaccinations and, under reasonable assumptions, maybe a few thousand lives saved."
Your math here seems like it might be using the wrong denominator. If 240/330 million Americans are currently vaccinated, then a 1.4% increase should mean either 1.4% of 240 or of 330, depending on what's being measured. In either case, it means the effect is bigger than you said!
Either way, dismissing a 1.4% effect size as generally irrelevant is insane to me.
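For reference, applying the post's 1.5% figure to each candidate denominator (the population figures are just the rough ones quoted above):

```python
# The quoted post's 1.5% nudge effect applied to three possible denominators.
effect = 0.015
for label, population in [("unvaccinated (the post's denominator)", 90e6),
                          ("currently vaccinated", 240e6),
                          ("all Americans", 330e6)]:
    print(f"{label:40s} -> {effect * population / 1e6:.2f} million extra vaccinations")
```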
I witnessed the Identifiable Victim Effect firsthand. Friends of my girlfriend were raising money for Zolgensma for their newborn. It’s Pretty Damn Expensive at about $2 million, but it cures the condition which is otherwise debilitating. To my surprise they succeeded, but the very fact they did it indicates that the donors were willing to save the life of one child at a cost which can surely save so many more people.
As for the problems I have with Marx's theories in their own right, that would require an even longer post. Though the fact that his predictions have almost entirely failed to occur (and that Marxists frequently need to creatively "re-interpret" what those predictions meant, in the same way as Christian apocalypse cultists or followers of Nostradamus) certainly plays a major role.
> it does incline me to distinguish myself from those actual Marxists as much as possible.
By doing so you are playing to the stereotype, since the very claim is that Marxist-style interpretations were applied to social issues, not that these people believe in Marxism. This ideology is especially attractive to well-off people, as it can be coupled with support for capitalism.
So why did the jury unanimously come to the opposite conclusion?
Much of the same bundle of reasons a jury found OJ Simpson not guilty, I expect.
As per jstr below. Do you believe it would have been possible for Chauvin to get a fair trial? It was obvious that an acquittal, or even a conviction for manslaughter (which was the most plausible charge), would result in riots, and quite likely that it would result in the jurors being identified, demonized, very likely assaulted, possibly killed.
If you were on the jury and believed the proper verdict was acquittal or manslaughter, do you think you would have held out for it, producing, and being blamed for producing, a hung jury?
I assume the jurors were properly selected. That wasn't the problem.
Jury intimidation. The fact that it had been made clear, pretty explicitly by one congresswoman, that an acquittal would lead to riots, and was likely that even a manslaughter conviction, which there was some basis for, would.
"It was obvious that acquittal, or even conviction for manslaughter, which was the most plausible charge, would result in riots, quite likely that it would result in the jurors being identified, demonized, very likely assaulted, possibly killed."
What makes this 'obvious'? This alleged fear of angry black people and their left-wing allies has not led to similar results in the Michael Brown and Eric Garner cases, which didn't even make it to indictment. Pretty much true for Breonna Taylor, too, the 'wanton endangerment' indictment was practically dark humour.
Also, none of the jurors in the germinal case of Trayvon Martin, to the best of my knowledge, have since been assaulted or killed. Granted, that wasn't a police affair, but it centered around race and presumably deadly assailants with axes to grind against the system would not have been too picky.
I'm very surprised by the claim that Floyd wasn't murdered and I'd like to know why you believe that.
As you know, Chauvin was convicted of murder by a jury which determined that his actions towards Floyd fit the legal definition of murder. Do you think the legal definition of murder is wrong? Or do you think that the jury was mistaken? If the former, which of the legal elements of murder do you think are incorrect? If the latter, which elements did Chauvin's conduct fail to satisfy?
But, the DA's office saw all those things and chose to press charges, and the jurors saw all those things and chose to convict - unanimously - and the judge saw all those things and decided that as a matter of law they did not preclude a verdict of guilty on the charge of murder in the second degree or murder in the third degree. It seems everyone involved in this process - experts and non-experts alike - saw all the evidence that you claim exonerates Chauvin. That means you are asking me to take it at your word that if I watch the two half-hour police cam videos and read the autopsy report and the medical examiner's trial testimony, I will conclude that Floyd wasn't murdered. But it's your word against at least fourteen other people's words, all of whom are more familiar with the case than you.
So unless you can give me the specific element of murder that you feel was missing from Chauvin's actions, and the specific fact or facts that you feel demonstrate that this element was missing, I am absolutely not willing to take you at your word. "Just trust me, it's in the evidence" is not a legal argument. You are the one who seems to think that it is very important that people believe Floyd was not murdered, and yet the only thing you are willing to do to convince us is weave a vague and unsupported conspiracy theory in which BLM and the media somehow corrupted the jury. I think you'd convince a lot more people here in this particular comments section of your point of view if you could articulate an actual legal argument as to why you believe that Floyd was not murdered.
So why did the Hennepin County Medical Examiner--the organization that did the autopsy--put out a press release saying "manner of death: homicide"? https://content.govdelivery.com/attachments/MNHENNE/2020/06/01/file_attachments/1464238/2020-3700%20Floyd,%20George%20Perry%20Update%206.1.2020.pdf
Re: 2: this is a video of a doctor explaining how Chauvin's actions caused Floyd's death. It completely contradicts your point.
Re: 3: you are saying that Chauvin was clearly told that Floyd was in respiratory distress before he applied an unlawful restraint. Again, this completely contradicts your point.
The impression that I'm getting here is that you are unaware of the legal definition(s) of murder. Here is a reasonably good explainer of the murder charges against Chauvin: https://apnews.com/article/derek-chauvin-trial-charges-716fa235ecf6212f0ee4993110d959df
Key points from the article:
"Prosecutors didn’t have to prove Chauvin’s restraint was the sole cause of Floyd’s death, only that his conduct was a 'substantial causal factor.'"
The video which you linked is the explanation of why Chauvin's restraint, and the injuries and pain it caused, constituted a substantial causal factor. That's why I say it contradicts your point.
"WHAT’S SECOND-DEGREE UNINTENTIONAL MURDER?
It’s also called felony murder. To prove this count, prosecutors had to show that Chauvin killed Floyd while committing or trying to commit a felony — in this case, third-degree assault. They didn’t have to prove Chauvin intended to kill Floyd, only that he intended to apply unlawful force that caused bodily harm.
Prosecutors called several medical experts who testified that Floyd died from a lack of oxygen because of the way he was restrained. A use of force expert also said it was unreasonable to hold Floyd in the prone position for 9 minutes, 29 seconds, handcuffed and face-down."
The reason why the use of force expert is relevant is because Chauvin's use of force was only unlawful because it went against police training and because a reasonable police officer would not have used such force in the course of their duty. Generally police officers have a lot of latitude in determining appropriate levels of force, so it is extraordinary for a jury to find that a police officer acted unreasonably. That's why the copious video evidence and testimony from multiple experts were needed to prove that the use of force was unreasonable, and therefore unlawful.
There's more in the article - about the third-degree murder and manslaughter counts, plus loads of other links to individual pieces of testimony and explainers about their role in the trial and other implications.
I would recommend, before you continue calling people on here ignorant and lazy and ranting about how Scott and all of his readers have been hoodwinked by Marxist propaganda, that you acquire a single modicum of understanding of what the actual law is that applies to this case.
In particular, if you are going to argue that the jury was mistaken, you should be talking about the legal definition of murder in Minnesota and whether or not Chauvin's actions meet that definition. Because your presenting a set of facts which are completely in line with and clearly demonstrate the legal definition of murder and saying "see! no murder! you ignorant fools!" is neither a valid nor convincing argument that Floyd was not murdered.
The same question I put to dionysus above. Acquittal, even conviction for manslaughter, would have led to rioting. Do you disagree?
Voting for acquittal or manslaughter, especially if it led to a hung jury, would probably have resulted in the name or names responsible being leaked and those jurors and their families being targeted by massive hostility, possibly physical assault. Do you disagree?
If you agree with both propositions, do you think that if you were a juror who believed he was innocent or guilty only of manslaughter, you would have voted that way?
Oh, are we just responding to each other's questions with questions? Do you think this is a good way of making progress towards the truth? What assurances do I have that if I answer your questions, you will answer my questions?
Hypothetical insinuations about the motivations of jurors are not even close to an argument that Chauvin didn't commit murder. One wonders why you are dodging the simple, direct question of which element of murder you think was absent from Chauvin's actions.
I recall a previous argument in which you took me to task for assuming that the authors of the Great Barrington Declaration knew that their proposals would have led to mass preventable mortality. But here you are pretending to know the motivations of 12 independent jurors based on your personal assessment of the likely consequences of their votes? I feel like you can't have this both ways.
So I'll answer your questions, why not.
"Acquittal, even conviction for manslaughter, would have led to rioting. Do you disagree?"
I believe based on evidence from most other protests of police acquittals that protests would have been mostly peaceful and smaller in scale than the protests of the murder itself. I also believe that there's an important distinction between rioting and vandalism. I'd say if there was rioting, 95% chance it would have been less severe than the 2018 Super Bowl riots in Philly. But more importantly, I am not morally responsible for someone's reaction to decisions if I do something legal and they choose to commit a crime, so while the causality here might exist, I do not believe it would impose an ethical duty.
"Voting for acquittal or manslaughter, especially if it led to a hung jury, would probably have resulted in the name or names responsible being leaked and those jurors and their families being targeted by massive hostility, possibly physical assault. Do you disagree?"
Yes, I disagree with the leak hypothesis, and I disagree that jurors who voted to acquit and their families would be physically assaulted. I don't know where you are getting the idea that BLM is like the mafia. Can you give a single example of a case where a cop was acquitted for murdering a black person and the jurors and their families became targets of violence? Like literally, where are you getting this stuff?
"If you agree with both propositions, do you think that if you were a juror who believed he was innocent or guilty only of manslaughter, you have voted that way?"
I would not vote to put an innocent man in jail in order to prevent riots or avoid having hostility directed at me or my family. That's crazy.
Haha, 1 is very true. The beginning of Thinking Fast and Slow was full of gotchas, those little thought experiments where the reader submits to being tricked so the author can prove a point. I skipped them all.
What about behavioral economics is friendly? Or unfriendly, for that matter? I didn't see anything about it being for anyone's "own good".
Like all sciences, the principles investigated by behavioral economics aren't good or bad. They're just there. Whether those principles are used to get you to tip your Uber driver or to encourage you to get vaccinated, you can't use the applications of a science as a condemnation of the science itself.
The fact is, we're manipulated by our minds every moment of every day. Behavioral economics is just investigating how and why. You can't hold that against it any more than you can fault biomedical science for enabling cosmetic surgery. If you're so averse to being manipulated, you should be studying more behavioral economics so you can learn how to avoid the cognitive biases in question.
I mean, I'd absolutely condemn the scientific subfield of "kidnapping people or buying access to people in jail and forcibly administering them drugs as an experiment", which has been done.
And behavioral economics is "investigating" how and why in the same way that international relations professors and think tanks are just "investigating" - no they are not; they are creating and implementing ideas too.
And cognitive biases are not real, believing they are real is useless, just don’t buy the thing being marketed!
He isn't describing behavioral economics. He is describing "libertarian paternalism," the policy advocated in _Nudges_ and based on behavioral economics. A natural enough confusion.
It is true that the "Nobel Memorial Prize in Economic Sciences" is officially the "Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel" and was not created or funded in the same way as the other prizes. However, the claim that this somehow discredits the award doesn't make much sense to me. The prize is still administered by the same Nobel Foundation using the same nomination and selection process. The prestige of the award comes from the process by which they select the greatest achievers in each area, not from which rich person initially donated the funds for it.
https://en.wikipedia.org/wiki/Nobel_Memorial_Prize_in_Economic_Sciences
> The prestige of the award comes from the process by which they select the greatest achievers in each area, not based on which rich person initially donated the funds for it.
I have a bridge you might be interested in. I believe the vast majority of the prestige comes from the name "Nobel", associated with Nobel laureates who achieved demonstrably great scientific discoveries, like all of modern physics and chemistry, and secondly from the royal pomposity.
If it were only the selection criteria that made it prestigious, why does nobody call it the Swedish Riksbank Prize? Because then it would be one of those field-specific academic prizes far fewer people outside the respective field have heard of, like the Fields Medal or the Abel Prize. For the same reason, the Nobel Peace Prize is still a big deal despite recent recipients' usually lackluster achievements in promoting fraternity between nations or organizing peace congresses or the abolition of standing armies.
If the bridge comes along with international fame and a million dollars...
Note that what you wrote is not true of the Nobel Peace Prize, which was funded by Alfred Nobel but is awarded by a committee appointed by the Norwegian parliament.
I disagree slightly. I would modify this: nudges and behavioral economics and such often are attempts to manipulate people into giving up their money and doing harmful actions along with giving up the money - and not enough people are noticing and avoiding it. For instance, don't buy things that you see in ads! Just don't! A good general rule. Especially something like clothing or shoes - the manufacturing cost is like a tenth of the marketing budget, so you're literally just paying to see ads. And the food you see advertised will kill you a month earlier and make you subtly feel like shit. And all of those companies use all sorts of behavioral econ tricks and psychological whatevers, whether or not they "work". And that compensates for the product being bad. I wish advertising didn't work...
Given the qualifications required to win a Peace Nobel, perhaps the difference should give you more confidence in the Economics Nobel.
About manipulation, I do think some of this "science" or science (depending on your view) has been harnessed in gross ways to serve advertising and commercialism.
On the other side though, anything that is planned or designed in an intentional way to serve groups of people, has this element of "nudge" or goal-orientedness built into it.
Parking, transportation, building, and office space design as well as zoning generally all have behavioral nudges built into them. Employer-provided health insurance plans, smart phones, computer operating systems, automobiles, drug store register displays, classrooms, the DMV, and city parks. Those digital speed limit signs are a behavioral nudge, as are old-fashioned regular speed limit signs for that matter.
There's no human-designed system free from built-in bias it wants us to conform to. It's all optimizing for something. The more we disagree with what it's optimizing for, the more I think we're likely to resent feeling nudged by it. Come to think of it, perhaps that's part of what so many people find relieving about being in the woods or in other nature, that sense that this system has no intentional designs on us. It's there for itself and its own ends and so gives us some relief from our own human-centered egos.
> I looked for the full text of Galenter and Pliner, but could not find it. I was however able to find the first two pages, including the abstract.
Your mistake was probably checking LG/SH for the paper itself. But in this case, SH has failed to associate the chapters with paper entries, even though it has the book: http://libgen.rs/book/index.php?md5=4B42C4388546A9A6A706368EE8AFA063 Anyway, here's just the paper: https://www.gwern.net/docs/statistics/decision/1974-galanter.pdf
When I read Scott saying that, I immediately thought "I bet gwern has it".
I do now!
For those who are too lazy to click the link: the quote in the post elides "In particular, the asymmetry is one only of the magnitude of the exponent on the same side of unity" and is immediately followed by "What is not immediately obvious is the profound effect that a change in exponent of this magnitude produces".
On George Floyd and the Identifiable Victim Effect: I think a huge amount of the difference was external pressure, specifically the effects of COVID-19 lockdowns, and simmering discontent over Breonna Taylor's death, which shortly preceded US lockdowns.
Some of it is also bias in the people who are inclined to protest. There are notable protests over something that accounts for less than a tenth of a percent of all murders in the US. Vilfredo Pareto is rolling in his grave.
Well, and not all Identifiable Victims are equal. If you put a picture of Betty White age 90 on a poster talking about limits to access to birth control for poor women, I bet nobody will give a shit. You need to put some ingenue that looks like everyone's teenage daughter. It's always a possibility that the experimenters were incompetent at designing their Identifiable Victim poster -- it is, after all, an art, and advertisers pay big bux to talented people who can do it effectively.
Good point. Also, that experimental design is graphically person versus map, not person versus group of people. I can see why it's difficult to indicate "group of people" graphically while preventing the viewer from locking onto one of the people. So they used an abstraction. But a map hits all the nationalism/country buttons, and I'm not sure it hits very many group-of-people buttons. They may have inadvertently studied single mom versus patriotism.
Well, "looks like everyone's teenage daughter" may backfire, but I see your point.
What makes George Floyd a better Identifiable Victim? If I was trying to gin up outrage a priori, he'd be a good ways down on the list.
The usual police explanation for killing someone is that they were in some sense a threat. Floyd was so obviously not a threat.
The police weren't claiming to have intentionally killed him, any more than they intentionally killed Eric Garner when he asphyxiated. But enough kids have been shot by police to rank well above him.
I agree with your point here, but want to point out that in a legal sense, "intention" has a nuanced meaning which includes the concept of what a reasonable person could foresee. If a reasonable person could foresee that Chauvin's actions had a reasonable chance of killing Floyd, then Chauvin's choosing to do the actions constitutes intent.
https://en.wikipedia.org/wiki/Intention_(criminal_law)
As to the question: I don't think the outrage in the Floyd case was about who he was, but about the extraordinary way in which he was killed, over a long period of time, with him and surrounding civilians literally begging the cops to stop killing him. So I don't think it was the identifiable victim bias at all, but the raw horror of that particular video.
If Chauvin had knelt on the necks of other people he arrested before without killing them, would that indicate that killing was not his intent?
I agree that Identifiable Victim has little to do with it: Dennis Tuttle and Rhogena Nicholas were identifiable victims of far worse police misconduct (made-up controlled drug purchases to get a warrant), yet the public response was basically nonexistent.
Floyd was on camera. That's it.
He was a large, strong guy and was resisting arrest at the time, but your statement also seems to be implying that the killing was a deliberate planned act. That's not what I saw on the video. The guy may not even have died from the hold he was in. IIRC he was also yelling "I can't breathe" at the top of his voice whilst standing up next to the cop car, so his breathing difficulties seem to have been somewhat independent of the neck restraint. And he was supposedly on drugs at the time that can cause difficulty breathing (or at least the perception of it).
That, plus his criminal history - all in all, it's very unlikely that anything to do with Floyd himself led to the explosion of interest in the case. It seems more like it didn't matter who he was or what happened; it was the time and context that mattered.
By the time he's handcuffed and on the ground, he isn't a threat.
I did read something arguing that people on drugs who act like Floyd did can suddenly and unexpectedly become very active and violent and a threat to themselves at least.
Damfino. If I understand why, I'd be making $10 million a year as an advertising guru.
The video. He spent nine minutes pleading for his life while Chauvin choked him out. It was horrific. It has little to do with his personal history.
Also, presumably there is potential bias in how important the event is perceived to be. You could view George Floyd as the culmination of multiple related news items, for instance. The salience of a full-length video probably mattered too. I suspect you'd see a smaller difference if you played the Floyd video against a supercut of fairly unjustifiable police killings.
That.
I think there are thresholds, whether local (Ferguson/Brown) or national (George Floyd). And, on the national level, I think George Floyd death being so obviously wrong yet filmed was a big reason for its impact.
We felt it even in Europe, and it's not like we didn't know your police kill unarmed black people in an over-the-odds way (and our own police are also heavy-handed when dealing with our own disliked minorities).
I don't want to relitigate the whole thing, but standing on the neck of someone who repeatedly tells you he cannot breathe and is no longer presenting a threat (if he ever did) is a-okay SOP for police? Are you sure?
Why would you present it as "standing on the neck"? Surely you know that's an outright lie, doesn't that undermine your position?
He had one knee on the back and one knee on the side of the neck. Hard to tell how much pressure he was exerting since his feet were on the ground, but some people were noting that Floyd's head was still moving and turning and other people reenacted the position and had no issues.
I don't think it's obviously wrong to be in that position exerting light pressure to control someone on drugs until the paramedics arrive. In my mind the obvious error was in not checking his breathing, pulse, etc. when concerns were raised about that.
Alexander Kueng did check for a pulse, actually. Neither he, nor Chauvin could find it. Thereafter, Chauvin knelt on Floyd for upwards of two additional minutes.
"On drugs" isn't a carte blanche. Floyd, handcuffed and half-asphyxiated, was to any reasonable observer clearly no threat to the four cops around him. And in any case, if Chauvin had actually been following standard police procedure (which the MPD police chief certainly went to great lengths to deny) that arguably constitutes firmer grounds for public outrage than the idea that he was a rogue racist cop.
The cop in question was convicted of murder, and given just how hard it is to convict a cop of anything, I would think that's a verdict we can probably accept.
>your police kill unarmed black people in an over the odds way
Not "over the odds" — a bit under, if anything. This is a media effect. The old SSC itself has an essay on this, and newer data since is even more convincing: police killings of black people are basically as expected given police deaths at the hands of black people, black officers shoot more readily than white officers, etc.
So I knew I wasn't being careful enough when writing that. The general feel I get from the statistics is that the black community is treated as "all criminals, some just haven't been convicted yet".
So if the police go full blast every time they encounter a black person, then yes, I think we can have the statistics we have and the feeling the black community has, which is (afaict, I'm not black etc) that they don't have a police force, they are dealing with an occupying force.
I think traffic-stop data (stops being a lot more common than shootings of unarmed people) are in a way better for understanding that police do not treat black citizens with nearly the same leniency and tact accorded to white people.
Note that even the research below is not airtight, and the authors recognize that.
https://openpolicing.stanford.edu/findings/
I don't think the black community has that feeling in general. If I recall, in polls blacks tend to want somewhat more policing, because they are disproportionately the victims of crime.
Obviously there is a confounder here which is that American blacks commit crimes at a much higher level than other races. Anytime there is a discussion of black-white, I want to also see white-Asian (Asians committing much less crime than whites), men-women (men committing much more crime than women), blacks-immigrant blacks (many immigrant Africans are more successful than whites, not sure about criminality).
Again, I'm not black, not even American but my firm impression from being very interested in America is that the black community wanted *more police if it was *better police i.e. they recognize they're more likely to be victims, thus want protection but are unhappy when their nominal protectors turn out to be yet another issue they have to deal with.
https://news.gallup.com/poll/316571/black-americans-police-retain-local-presence.aspx
It's not so much the volume of interactions Black Americans have with the police that troubles them or differentiates them from other racial groups, but rather the quality of those interactions.
Most Black Americans want the police to spend at least as much time in their area as they currently do, indicating that they value the need for the service that police provide. However, that exposure comes with more trepidation for Black than White or Hispanic Americans about what they might experience in a police encounter.
Pretty sure that police treat Asians, women, etc. better than blacks (yeah, I know, I have no hard stats to offer), but it's not really relevant. While the issue is being couched as institutional discrimination / police being racist pigs, it's IMO much more likely that the real problem is police having prejudices / stereotypes about different people (which, as often with stereotypes, may even be grounded in reality at the population level), but that then negatively impacts individuals in the black population who are actually totally innocent (like the black Harvard professor who got accosted by police on the steps of his own home because he was mistaken for a burglar, etc.). So police probably have the same positive non-criminal stereotypes about Asians as you do, but their resulting courteous treatment of that community doesn't say anything about how badly they may be behaving towards another.
That assumes we have accurate stats. The report on George Floyd initially said something like "died of a medical crisis during arrest". No camera, and it's not only not a crime, it's not even included in the stats for police killings.
The video mattered a great deal, yeah. It was unedited and free of commentary, and it narrowed down the room for equivocation on whether Floyd was a threat, which the police, and people who side with the police in these cases, would normally automatically claim.
As a result, it won black people a lot of non-black allies in this particular case, which was the main reason for the volume of interest and media exposure, and the feedback loop of anger.
Exactly - I think what Scott is missing here is that there wouldn't have been protests over George Floyd if people hadn't ALSO been aware of the statistics (and other cases) showing that this kind of thing happens a lot (or anyway far too often). As with everything that has to do with communicating with lots of (different) people: it's easier to have a simple slogan / poster child for something than to always bring a complex message, which for me explains the George Floyd murals.
Also, the video being painful and widely shared on social media was a big contributor. And a lot of activists really wanted such a case and worked to spread it!
COVID may have modulated the response up or down a bit, but I think you're under-selling the impact of the video of Floyd's death. It was powerful, it was tangible, and it was unequivocal.
A man being calmly choked to death for no great reason by police while the public stands around and tells them not to, while recording it on camera, was a novel experience. If it happens again I'm sure the protests will be smaller.
A single mother struggling is less novel.
I've got one.
There's another effect you didn't mention, too. You mentioned how people generally refuse an offer where someone offers to flip a coin and you get $60 if heads and lose $40 if tails.
If a stranger offered me that deal, I'd very reasonably assume that the coin was weighted to be very likely to land tails, or one of those coins with tails on both sides or something.
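Just to put a rough number on that suspicion, here's a small back-of-the-envelope sketch. The payoffs are the ones from the example above; the probabilities are hypothetical:

```python
# Break-even check for the win-$60-on-heads / lose-$40-on-tails offer.
# EV = p*60 - (1-p)*40, which crosses zero at p = 0.4, so the coin only has
# to land heads a bit less than 40% of the time for the "generous" stranger
# to be making money off you.
def expected_value(p_heads, win=60, lose=40):
    return p_heads * win - (1 - p_heads) * lose

for p in (0.5, 0.4, 0.3):
    print(f"P(heads) = {p:.1f}: EV per flip = {expected_value(p):+.2f} dollars")
```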
I assume there's a slight bias toward this kind of thing that people have learned from untrustworthy people. "What? You want to give me something that you say is equal value? Now I suspect that what you offer is worse or what I have is better than I think, so I'll keep what I have".
Scientific studies on results like those don't actually offer people that bet, though. They don't stop people on the street and ask them to flip a coin for money, they ask them to participate in a study to investigate their decision making, done by x researcher at y university and approved by an ethics committee. In reality people might think "this guy is trying to scam me" when asked if they would take a bet, but that explanation doesn't hold for scientific studies.
How DO studies like this work? I remember reading Thinking Fast and Slow and they talked about how when they worded easy questions in tricky ways, students in their studies would get them wrong. Well, sure -- the students have no incentive to try, right?
They don't, but social pressure, especially coming from an authority figure in a science lab, is a very powerful force. People will usually try because it's expected of them.
Example methodology from one of the experiments cited by Gal-Rucker (as far as I know this working paper was unfortunately never published, this is from a pre-preprint draft I received but I found it much better than anything they ended up actually publishing):
Method:
Participants, 417 individuals, were recruited through Amazon Mechanical Turk and randomly assigned to WTA, WTP-Retain, or WTP-Obtain conditions. Scenarios varied by condition as follows:
WTP-Obtain:
Imagine you can buy the elegant stainless steel travel mug pictured below. It has a retail price of $25.
[Picture of Mug].
What is the most you would be willing to pay to buy the mug?
WTP-Retain:
Imagine you own the elegant stainless steel travel mug pictured below and that it is in new and unused condition. It has a retail price of $25.
[Picture of Mug]
Imagine you accidentally left it behind at a hotel. You receive a call from the clerk at the hotel stating that he can ship it directly back to you if you are willing to pay the costs. The standard shipping rate is typically about $25.
What is the most you would be willing to pay to keep from losing your mug?
WTA:
Imagine you own the elegant stainless steel travel mug pictured below. It is in new and unused condition and in the original packaging. It has a retail price of $25.
[Picture of Mug]
What is the least amount of money you would accept to sell the mug?
Afterwards, participants in the WTP-Obtain and WTP-Retain conditions were asked whether they thought of their decision in terms of paying to prevent the loss of the mug or in terms of paying to gain the mug. Specifically, participants were asked, “how did you think of your decision,” with the two options being, “I was thinking of what I would be willing to pay to keep from losing my mug,” and “I was thinking of what I would be willing to pay to gain a mug.”
"Imagine you can buy the elegant stainless steel travel mug pictured below. It has a retail price of $25.
What is the most you would be willing to pay to buy the mug?"
Nothing. I don't use travel mugs and I distrust advertising bumpf like "elegant" and "stainless steel" in the same sentence. I'd need to see the picture of the mug to be convinced it was anything other than "functional piece like a hundred others with nothing distinguishing about it", and even then I'd be "I don't need or want one, so I'm paying nothing".
As for questions (2) and (3), unless the mug was something of sentimental value - my dear old granny bought it for me for my birthday before she sadly passed away, for instance - then if it's going to cost me as much to have it posted back to me as buying a new mug, the hotel can keep it and I'm buying a new mug.
For (3), plainly the minimum sale price would be $25 plus whatever postage and packing costs I have to pay to send off this mug; if I can find someone willing to pay more, that's a bonus. If I can find a mug (ha!) willing to pay me $100 for a $25 mug, then that's what I'm asking.
I think you are hitting the nail on the head here. I have often thought that behavioral economists have a flawed theory of mind behind a lot of their findings, in so far as they assume people think exactly like them (or should) and then run the numbers to see what deviates from themselves.
Especially on the selling side; how many people ever liquidate lots of stuff they have but don't like that much anymore? The annoyance and irritation of going through the transaction, not to mention the hidden issue of "maybe I will want that later and have to reacquire one", makes sitting on your stuff a lot more rational than a model assuming instant, frictionless transactions accounts for.
I wouldn't pay $25 for a travel mug, that sounds too expensive, and I'm (presumably) many times richer than the average Mechanical Turker, for whom $25 might be an entire day's work.
If you ask me about buying the mug, then I'll answer as me. But if you ask me about retaining the mug, then I'll have to answer as a hypothetical version of me who actually liked that mug enough to pay $25 for it in the first place. Obviously, hypothetical mug-owning me is going to value the mug more than real me, as demonstrated by the fact that he already owns one and I don't.
A real world example of.... something.
I had a Williams and Sonoma Cantine soup bowl that I liked a lot. I didn't pay very much for it-- under $15, I think.
I broke it.
There was one available on ebay for $50. After some wavering, I bought it.
I happened to have a shard of the first bowl, so I could tell they were the same, but somehow, I didn't like the new bowl as much. The fancy crackled (?) glaze on the inside wasn't as entrancing.
Yeah, that's a decision based on more than merely "what was the original price of this item?" which is, I suppose, the whole point of irrational economic choices. I don't care about the hypothetical travel mug in the study mentioned above, so I wouldn't pay to have it posted off to me or need to be paid extra to sell it (I'd happily take extra if anyone offered, but if they offer me $25 for a mug that costs $25, okay fine).
It would be different if it was something I liked and valued and wanted a replacement for that was the same, not a new thing.
On the other hand, I bought a fairly cheap cast-iron trivet for my teapot, and I am very happy with it, and I get pleasure out of it every time I look at it. So if I lost it or needed a replacement, I'd probably be willing to pay that bit extra.
And the Gods of the Copybook Headings said: stick with the devil you know.
If you knew ahead of time exactly what joy you would get from the soup bowl, would you have been willing to pay $50 for it the first time?
If so, this might be a real-world example of your demand curve being identical at two points in time, with a willingness to pay which exceeded the market prices at both t1 and t2.
It feels like I would have paid $50 the first time-- and not replaced it when I broke it, but it's very hard to be sure.
It would be an interesting world if people could get emotional totals of the results before they made decisions.
I participated in similar studies in undergrad and in my experience they really do offer the bet. Typically the format was something like "you start with 1000 tokens to play with, at the end we'll give you $1/100 tokens" and then they actually have you play games with the tokens. Dunno how that specific study worked, but actually offering the bet is common, and in my experience it was pretty obvious that they would actually pay what they promised.
In favor of recovering Andrew Vlahos's point, (a) Once learned, heuristics carry over even when inappropriate. If people are behaving in a way that's rational in the real world, but irrational in a lab setting where the odds happen to be trustworthy, should we really emphasize their irrationality? (b) How do subjects know they can trust the experimenters? Psychologists routinely lie to subjects. It so happens that on the other hand, experimental economists have a strict code of *not* lying, but can the subjects always make such a fine distinction?
"In reality people might think "this guy is trying to scam me" when asked if they would take a bet, but that explanation doesn't hold for scientific studies."
I am nitpicking here a bit, but given that the majority of these studies seem to be overtly telling participants they are studying effect X while secretly trying to detect an undisclosed effect Y, I'm not too sure I'd be all that trusting of what study organizers tell me. I say nitpick because I don't think the majority of study participants would be sufficiently aware to think like that.
It might be possible to tease out that effect by placing people in a scenario where they provide the coin themselves vs the stranger providing it.
Then you run the risk that the person themselves is carrying an unfair coin. The best way is just let them flip the coin as much as they want to convince themselves it's fair, and then decide whether to take the bet.
Random test subjects often carry coins, but seldom carry weighted coins.
(And if I remember right, it's actually physically almost impossible to make weighted coins.)
In any case, there are even techniques for getting unweighted random bits out of weighted coin tosses, as long as they are independent (and identically distributed). But your average person might not understand them well enough to trust them.
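For anyone curious, the standard technique here is usually attributed to von Neumann: flip the suspect coin twice, call heads-then-tails one outcome and tails-then-heads the other, and throw away the doubles. A minimal sketch (the 70/30 bias is just a made-up example):

```python
import random

def biased_flip(p_heads=0.7):
    """One toss of a coin that lands heads with probability p_heads."""
    return random.random() < p_heads

def fair_bit(p_heads=0.7):
    """Von Neumann extractor: fair bits from independent biased flips.

    Flip twice; heads-tails counts as 1, tails-heads as 0, and doubles are
    discarded. The two kept outcomes are equally likely whatever the bias.
    """
    while True:
        first, second = biased_flip(p_heads), biased_flip(p_heads)
        if first != second:
            return int(first)

# Quick check: extracted bits come out near 50/50 despite the 70/30 coin.
bits = [fair_bit() for _ in range(10_000)]
print(sum(bits) / len(bits))  # ~0.5
```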
This was my thinking as well. Virtually no one intentionally carries a coin that's biased. If a stranger approaches you on the street the likelihood you will be able to take advantage with a weighted coin is very low.
If someone offers you a seemingly advantageous bet, it's likely that there's something they're not telling you or lying about which makes it not advantageous, otherwise they wouldn't offer it to you.
Trying to demonstrate to the subject that some *specific* fact is not being concealed from them won't help. It's not as if having a biased coin is the only possible thing you might conceal.
> Trying to demonstrate to the subject that some *specific* fact is not being concealed from them won't help.
How does this not lead to an inductive proof that gaining trust is impossible?
People trust all the time, but it's when it makes sense to them. If the house offered $40 when you win and charged $60 when you lose, the person is very likely to trust the offer is legitimate as written. Of course someone is going to offer a path that gets them more money. When you offer to lose money to someone, they become suspicious, because that's directly counter to what they would expect. Offering someone a $50 service for $50 is also very trustworthy because it makes sense and is expected. There are lots of ways to build trust, if we can understand the motives of the person we are talking to.
"One of these days in your travels, a guy is going to show you a brand-new deck of cards on which the seal is not yet broken. Then this guy is going to offer to bet you that he can make the jack of spades jump out of this brand-new deck of cards and squirt cider in your ear. But, son, you do not accept this bet, because as sure as you stand there, you’re going to wind up with an ear full of cider."
-- Damon Runyon via Guys and Dolls
If the stranger flips it, you will have to worry about sleight of hand tricks.
Yeah, but in Laurence's scenario (the one I replied to) the stranger isn't flipping it. I think the "stranger-approaches-you-on-the-street" scenario is too stereotypically ominous to be useful in a study trying to eliminate trustworthiness as a factor.
if a stranger approached me on the street with that kind of proposition i'd be worried it's a distraction and am about to get pick pocketed / mugged, or at very least it's just a hook to get me started down path of some other scam.
Well, and there's inertia. You have some plans for the day going forward, and they rely on your having a certain amount of money available. If you suddenly have an extra $60, well that's nice, but it probably won't change your plans super much. But it's easily possible for plans to be derailed and require recalibrating (if only a little bit) if you *lose* $40 on which you were counting.
Id est, the brain-dead aspect of some of these experiments feels to me like the failure to appreciate the fact that the subject of the experiment has a history --the moment of his interaction with the experiment feels, to the *experimenter*, like it's just a moment cut out of time, standing by itself. But to the *subject*, it's a moment connected smoothly with all the other moments of his life, and the flow, from moment to moment, has a substantial inertia, because planning takes work, and recalibrating after a disturbance takes work. This would be a very rational -- and utility maximizing -- root of a Status Quo effect (or for that matter of loss aversion itself).
No behavioral economics experiments actually result in the participant leaving $40 poorer or $60 richer, though. For one, ethics committees would never approve a study that takes money from people, no matter how fair or well-informed the bet. Conversely, if half of the study participants walk out with $60 more, that's going to seriously hurt the scientist's budget. The most you'll see in an experiment like that is a raffle where one or a few of the participants win money according to the odds of two bets they choose between, like 90% chance of $10 and 10% chance of $50, or 60% chance of $0 and 40% chance of $20.
I’ve participated in some behavioral economics studies at my university, and they were giving away $30-$50 to every participant. I’m not sure how the funding model works here, but at some level if it’s government funded then society as a whole isn’t worse off so the government should be happy to give extra funds to studies that work like this.
It's still a bad way to study loss aversion though, because people's feelings about "free" windfall money that they've just been handed might be very different to their feelings about "real" money that they have earned.
I see, as a PhD student I might have a skewed impression and my superiors' research grants are smaller than most, but $30-$50 is not an excessive reward. My earlier point about the studies never losing you money stands, though.
Well, so you're saying the experiments don't actually test the real-world scenario? That's unfortunate.
There are rules about never letting them leave with less than they arrived with, but you can hand everyone $50 at the door and then, after 20 mins of questionnaires, offer them a bet where they have to give $40 back if they lose.
How can we ever trust the results if the participants do not believe there are any actual consequences for their decisions?
This is the point of large random samples. Each participant has a unique history, but as long as they all have different unique histories, it won't prevent you from detecting a real effect.
Sure, but the problem here is you may not be able to collect random samples that include people with no plans going forward...because there are almost no such people. That is, you can't average out the influence of preexisting plans unless (1) there aren't any, or (2) they don't correlate with what you're measuring. Since we're measuring how people respond to the prospect of the gain or loss of *money* and money is pretty much part of nearly all plans, that's pretty tough. Maybe you could do it with something other than money.
Agreed, and I think Scott's classmates didn't have loss aversion, exactly; it was something more like what you're talking about - they were just guessing the mindset of the people who made the rules. They just assumed the test makers had thought this through so that the dumb-sounding strategy wouldn't reward people.
That would be even more relevant for pre-meds, who rapidly become *phenomenal* at "test-taking" skills -- meaning, sussing out what the designer of the test wants you to say, and saying it.
You must also consider the utility of the bet to the individual. There are people for whom $X (be it 40, 400, or even 4000) represents a difference between being able to feed themselves and their children this month, or starving to death. If you offered them a prize of $1000 on heads, and a loss of $40 on tails, they would be irrational to take you up on it, because the real choice is between a vacation vs. death.
This is correct. The coin toss example is simply risk aversion; there's no loss aversion or status quo bias or anything unusual required.
Let's say you have $20 on you and want to buy a $15 sandwich. The deal may significantly harm your ability to buy an extra $4 drink after that. That variance is really bad for the small amounts of cash you carry on you. On the other hand, I'd happily take 10^5 of those in my brokerage account.
But that doesn't matter __at all__, because the constructed survey environments cause people to give fake survey-environment answers that nobody can claim are relevant to a real-world case.
Loss Aversion is definitely multi-dimensional. A 40/60 bet on a 50/50 coin flip doesn’t exactly equal a $10 gain. Rationality is also a social tool. Loss Aversion has a built in loss of time aversion, loss of interest aversion (i.e. I have better things to do), loss of face aversion (”I don’t want to look stupid”), etc.
Some great points here. In particular:
> I understand why some people would summarize this paper as “loss aversion doesn’t exist”. But it’s very different from “power posing doesn’t exist” or “stereotype threat doesn’t exist”, where it was found that the effect people were trying to study just didn’t happen, and all the studies saying it did were because of p-hacking or publication bias or something. People are very often averse to losses. This paper just argues that this isn’t caused by a specific “loss aversion” force. It’s caused by other forces which are not exactly loss aversion. We could compare it to centrifugal force in physics: real, but not fundamental.
As you note, it's very important to distinguish the effect itself from the theoretical mechanism we think is underlying the effect. It's of course possible (and I think likely) that the long list of "cognitive biases" will be compressed into a smaller set of principles, but that's a distinct claim from saying the effect(s) used to posit those biases in the first place doesn't exist or doesn't replicate.
Re: the "size" of nudge effects, it always comes back to what your baseline is––and even what the null hypothesis is. While I think it's important to be careful about not overstating any effect, I also worry that sometimes effects are dismissed for being too "small". From a theoretical perspective: if a consensus model says the effect shouldn't exist at all, then any effect size is interesting and important and potentially disconfirms that model (upon replication, etc.). And from an applied perspective: yes, a nudge is no substitute for actually designing a good product, and it's entirely possible for companies to overspend on small nudges––but nudges can still be a useful tool in the toolbox.
To your point, I guess I'd just urge those nudges to be grounded in generalizable principles where possible, and perhaps there's a dearth of carefully articulated, quantitative theoretical models in the social sciences.
On the size of nudge effects, I was surprised to read the claim of how small it generally is because I thought automatic opt-in for employer-sponsored 401(k) plans was one of the more well-known nudges and that the impact of that was enormous, both in terms of total volume of dollars saved and increase in percentage of people participating in those plans.
It's been awhile since I've read the Nudge book, but I also remember discussion of a building design with a stairway on the outside with nice views all the way up and a significant increase in number of people taking the stairs over elevator as a result, but I don't remember the details of that one as well. And behavioral change from making fruit more accessible and dessert less so in cafeteria food presentation. It seems like one would need to debunk a lot of material to argue that nudges are hardly effective.
I'd be curious to know how Hreha arrives at the 1.4% figure. In general, the quotes from his piece create the impression of a poorly written marketing brochure. Every sentence has its own paragraph, and big claims are made but not well supported.
I mean a government incentive program isn’t really a nudge? That’s kinda like calling a tax credit a nudge ... doesn’t seem right
I think you've misunderstood the 401(k) thing.
Before: if you opt in to the 401(k) then you get an incentive to save for retirement.
After: you are automatically opted in to 401(k). You can opt out and take the money directly as salary.
If economics worked on pure incentives, then the change would not have any effect. In fact, people are lazy, lots of people don't opt-in, but also don't opt-out.
Ah. I still don't think that's a "nudge", because - let's say you're mentally disabled and do not understand what a 401k is - automatic opt-in will be good for you because you won't know enough to either turn it on or off. Obviously that's not a real example, but "people act like game-theoretic agents" being false doesn't prove "cognitive biases" - some might just be dumb. I don't think "a lot of people don't either fully understand or fully pay attention to their 401k status, or don't know how to opt in or out" is a sign of cognitive bias. While it may be an effective policy (I was unable to find any studies either way, oddly), and might be classified as a "nudge", it's not really modifying a decision they make - it's making it for them.
I think it is also important to note that changing the value one way or the other is not costless. In that case we are back to status quo bias. What might be more interesting is how many people change their contributions after some change in the value of the 401k matching or taxes etc
One possible explanation, offered by my daughter when I was discussing this in a blog post years ago, is that switching is costless but deciding whether you should switch isn't. A good rule of thumb in many but not all contexts is that the default will be what most people do — that, after all, minimizes the administrative costs of allowing for people switching. If those other people who did look at the question have mostly decided to opt for the 401(k) it's probably what you should do, so you stay with the default.
If that's it, then it will stop working when and if a government actor who has read _Nudges_ is choosing the default to be not what most people prefer but what he thinks people should do, and the people eventually realize it.
i think it's not loss aversion but just people having 1) high inertia for this sort of stuff (i recently left a job and have been meaning to flip my 401k into an IRA but it's been 3 months and i have yet to get to it - it does take a bit of time probably making some calls and filling some forms) and 2) most people are financially illiterate so may be applying some kind of unconscious chesterton's fence approach here and leave default option on whichever it is because they don't have a strong view that alternative is better.
The 401k nudge probably mostly takes advantage of people's indifference and high transaction costs - it is rational for a person who thinks the authority making a decision is mostly benevolent to go along with the defaults unless that person knows their situation is unusual, has a high need for cognition, or has other pre-existing reason to change things up. Rational ignorance is not irrational if the expected value of learning enough to change the default is less than the cost of learning plus transaction costs.
The thing about the disutility exponent was weird and not clearly related to loss aversion. I think what they are saying is that if you gain x dollars you seem to value it at a utility of a*x^b, and if you lose x dollars you consider it a disutility of c*x^d (for some constants, a,b,c,d). The claim that they make (if I understand correctly) is that d > b.
So is this loss aversion? Well, would you get more utility from gaining x dollars than you would get disutility from losing x dollars? Well it depends on what x is (and on the exact values of a,b,c,d). What you know from d > b is that if x is sufficiently large, the loss disutility is bigger than the gain utility and that if x is sufficiently small, the gain utility is greater than the loss disutility.
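To make that concrete, here's a toy numerical sketch (the constants a, b, c, d are made up purely for illustration, with d > b): the two curves cross at x* = (a/c)^(1/(d-b)), with the gain utility larger below the crossover and the loss disutility larger above it.

```python
# Toy illustration: gain utility a*x**b vs. loss disutility c*x**d, with d > b.
a, b = 2.0, 0.5   # made-up gain parameters
c, d = 1.0, 0.9   # made-up loss parameters (d > b)

x_star = (a / c) ** (1 / (d - b))   # crossover point, about 5.66 here
for x in (1, x_star, 100):
    print(f"x = {x:7.2f}: gain utility = {a * x**b:7.2f}, "
          f"loss disutility = {c * x**d:7.2f}")
```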
Is this loss aversion? Not really.
I think that loss aversion is generally framed as referring to absolute amounts rather than utility.
So, 'losses have a higher disutility than gains have utility' is an explanation for why people are loss averse, but not a refutation of the phenomenon of loss aversion.
Huh? I don't follow at all what you are trying to say here.
When economists talk about loss aversion, it's usually in the form of 'people would rather not take a 50/50 bet of either winning $101 or losing $100'.
One explanation for why people wouldn't take that bet is because the utility of gaining $101 is less than the disutility of losing $100.
But that's not a proof that loss aversion doesn't exist. Loss aversion is literally just what we call the fact that they won't take the bet, which is true - they won't.
It's just a good explanation for why people are loss averse in certain domains.
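For what it's worth, here is roughly how far the plain diminishing-marginal-utility explanation gets you on that $101/$100 bet. This is only a toy sketch; log utility over total wealth and the wealth levels are my own assumptions, not anything from the studies being discussed:

```python
import math

# With a concave (log) utility over total wealth, turning down a 50/50
# win-$101/lose-$100 bet is plain risk aversion when the stake is large
# relative to wealth; at high wealth the same utility says to accept it.
def bet_value(wealth, gain=101, loss=100, u=math.log):
    """Change in expected utility from taking the bet at a given wealth."""
    return 0.5 * u(wealth + gain) + 0.5 * u(wealth - loss) - u(wealth)

for wealth in (1_000, 100_000):
    dv = bet_value(wealth)
    print(f"wealth ${wealth:>7,}: {'accept' if dv > 0 else 'reject'} "
          f"(change in expected utility {dv:+.6f})")
```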
Loss aversion is used colloquially in a bunch of different ways, but formally it means being more risk-seeking for losses than for gains. Convex utility represents risk-seeking behavior, whereas concave utility represents risk-averse behavior. In this example, if d>b (assuming d and b are both positive), the loss utility is more convex than the gain utility, therefore less risk-averse/more risk-seeking.
In case anyone wants a clarification on the difference, x^2 is convex, x^(1/2) is concave. 0.5(0)^2+0.5(100)^2 is greater than (50)^2, so with that utility, you'd prefer a 50/50 at $100 to $50 for sure. OTOH, 0.5(0)^(1/2)+0.5(100)^(1/2) is LESS then (50)^(1/2), so with that utility, you'd prefer to get the $50 for sure over the 50/50 shot at $100.
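And a quick numerical check of those two cases, in case anyone wants the arithmetic spelled out (same payoffs as above):

```python
# Expected utility of a 50/50 shot at $100 versus $50 for sure, under a
# convex utility (x**2, risk-seeking) and a concave one (x**0.5, risk-averse).
for label, u in [("convex  u(x)=x^2  ", lambda x: x ** 2),
                 ("concave u(x)=x^0.5", lambda x: x ** 0.5)]:
    gamble = 0.5 * u(0) + 0.5 * u(100)
    sure_thing = u(50)
    choice = "the 50/50 gamble" if gamble > sure_thing else "the sure $50"
    print(f"{label}: E[u(gamble)] = {gamble:7.2f}, u($50) = {sure_thing:7.2f} "
          f"-> prefers {choice}")
```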
*Is* that what loss aversion means? I mean consider:
https://en.wikipedia.org/wiki/Loss_aversion
"Loss aversion is the tendency to prefer avoiding losses to acquiring equivalent gains."
This is more or less how I've heard the term used. I don't think that I had ever heard the asymmetric risk aversion version.
In my field (economics, decision theory more specifically), it’s used in the risk sense. The third cite on the Wikipedia article, e.g., is a Koszegi-Rabin paper looking specifically at risk attitudes. In other fields it may be used in different ways, but it’s hard to make it, one, distinct from diminishing marginal utility, and two, meaningful, from a rigorous economics perspective without incorporating risk. For example, the Wikipedia article goes on to describe, “Loss aversion implies that one who loses $100 will lose more satisfaction than the same person will gain satisfaction from a $100 windfall,” but “satisfaction” isn’t a thing we can measure. Utility is ordinal, and it can only meaningfully be described as cardinal in a risk setting. We could talk about someone being willing to work more hours to avoid a $100 loss than to get a $100 gain, but that’s no different from having declining marginal utility of money. Similarly, the comment above, about how people won’t take a 50/50 win $101/lose $100 bet. That’s just risk aversion; if we call that loss aversion it’s not a new thing.
The only way I can think of to get a meaningful statement about loss aversion without incorporating risk would be something like, e.g., someone is willing to work up to 9 hours to increase their income from $100 to $200, but willing to work 10 hours to avoid decreasing their income from $200 to $100. I’m not aware of a loss aversion experiment based around that idea, however.
I mean it could mean that the derivative of their utility with respect to gaining money is 1 util per dollar, but the derivative of their utility with respect to losing money is 2 utils per dollar. This isn't really the same as simple risk aversion, which basically means that their utility function has a negative second derivative. It says that their utility function has a negative delta function as its second derivative at 0.
You could for example have someone who wouldn't take a bet to have a 50% chance of gaining $101 and a 50% chance of losing $100, but would rather take 50% odds of gaining $100 than 100% odds of gaining $50 and would rather take 50% odds of losing $100 than 100% odds of losing $50.
This person would be risk-loving unless the risk was distributed across the break even point. This might be an example of loss aversion.
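If it helps, here's one made-up value function that reproduces exactly that pattern (the exponents 1.2 and 0.8 and the loss multiplier 7 are invented purely for illustration, not estimated from anything): it's risk-seeking within pure gains and within pure losses, yet rejects the mixed bet because losses are weighted so heavily.

```python
def value(x):
    """Toy value function: convex on both sides of zero (risk-seeking for
    pure gains and pure losses), but with losses weighted 7x."""
    return x ** 1.2 if x >= 0 else -7 * (-x) ** 0.8

def gamble(*outcomes):
    """Expected value-function payoff of equally likely outcomes."""
    return sum(value(x) for x in outcomes) / len(outcomes)

print(gamble(101, -100) > value(0))   # False: rejects the mixed +$101/-$100 bet
print(gamble(100, 0) > value(50))     # True: prefers 50/50 at $100 to sure $50
print(gamble(-100, 0) > value(-50))   # True: prefers 50/50 at -$100 to sure -$50
```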
That could happen and would definitely be weird, but I’m not sure why we’d call it loss aversion. Like, if we tweak this so the person would rather take 100% odds of gaining $50 dollars over 50% odds of gaining $100, then it’s the standard loss aversion risk thing I started with, and it makes sense to me that person is loss averse: they’re cautious with gains, but will go all out to avoid loss. But the example where they’re risk-seeking with gains, risk-seeking with losses, but risk averse with the possibility for gain or loss…I’m tempted to say it’s “uncertainty” aversion? As in, I’m fine with I’m definitely getting zero or a gain, I’m fine with I’m definitely getting zero or a loss, but I could get a loss *or* a gain? That’s too much uncertainty, I can’t deal.
One difference between Floyd and the other 10 unarmed black guys per year is that we have full video of Floyd not being threatening, and a long time period where things could have been resolved in some other manner. For the others, it's easy for us to be skeptical and wonder, were they reaching for their pockets? Did the police know them from other incidents? Were they wearing gang regalia? Etc.
And of course when you see it all on video, it is natural to scale things up in your mind, wondering how many of the other incidents were similar, but didn't get filmed for whatever reason.
This. "US police kill about 1000 people a year..." is text. The George Floyd case was video. That's a massive distinction.
Well, I sure hope you cherry-picked those quotes from Hreha, because the collective impression I get from them is that he's an axe-grinding narcissist. I'm interested in this stuff, but the obnoxious tone of the quotes has persuaded me I can think of a better use of the next 10 minutes. What a putz.
Thank you. Very much my thoughts. I'd add "self-serving"; it sounds like he's in a competing business.
It sounded to me like he's a consultant trying to sell himself for millions of dollars to corporations, and this is just a salvo about why you should hire him instead of other consultants who use behavioral economics.
I sort of worked in this field for awhile at my old company, and it very much sounds like a discussion focused on the business aspects rather than the science.
The two are very different discussions - anything done in the real world has so many confounds that you can't just uncritically apply a single theory and expect it to predict everything on its own.
For example, you can never see the result of 'a nudge' on sales in the real world - you can only see the result of 'one more nudge' in addition to the hundreds already in place, including all the ones by all of your competitors. This will never give the same numerical results as a single nudge in a controlled lab experiment, but that doesn't mean the effect measured in the lab isn't happening in the real world.
Don't you mean... "Bhrehavioral Economics"? He-yo!
I think George Floyd to Mary is apples to oranges. Mary is a picture of a probably fictional woman, and her problem is that she's struggling with money. George Floyd was a 10-minute video of a real person, and the problem is that he was murdered by someone who would probably have gone free without all the media attention and protests. I'd also add that the George Floyd incident happened at a strange time in the US: it played into an ongoing story about the relationship between the police and black people, and also a ton more people had suddenly found themselves with free time to go to protests and stuff.
Maybe a better comparison would be: if you watch a short video following a day in the life of a single mother, are you more likely to donate than if you watched a lecture about poverty and single motherhood? I bet you're at least more likely to have an emotional response. But at that point, is it still the identifiable victim effect? It might be comparable to Hreha's claim about loss aversion: it's real when you turn up the volume knob, but not at "nudging" levels.
Tentatively on the identifiable victim: Maybe inspiring help isn't the same thing as building anger?
The Innocence Project, which helps falsely convicted people get out of prison, doesn't get nearly as much attention as BLM, even though the Innocence Project is helping specific individuals.
https://innocenceproject.org/
I think you have an important point here. There wasn't much 'help' involved in the protests and riots over the summer, mostly a lot of virtue signaling admixed with destruction of property and looting. I am not a big fan of the last two myself, but apparently lots of people find them enjoyable enough to require making it illegal to prevent it. Some people really like helping too, don't get me wrong, but it is probably easier to tip people over the edge of "You kind of like to smash and steal shit? Well, here's a justification to do so! People might even write books saying you were noble to set things on fire!" than the edge of "Spend some more of your own time and money to help someone else."
I think your intuitive reluctance (and your medical school friends') to guess on your test is probably founded in what Ole Peters has been talking about with ergodicity economics for a while now (individual actors don't experience the average across time).
https://www.bloomberg.com/news/articles/2020-12-11/everything-we-ve-learned-about-modern-economic-theory-is-wrong
Very well said -- my feeling pre-Ariely was that behavioral economics was a much more rigorous field than social psychology, and I agree that this continues to seem like the case.
Nudges though! My feeling about nudges *as a policy project* rather than as a research project is that they were an intriguing but failed idea. Retrospectively they are a strange thing for governments to focus on. Can nudges help you maintain a park or build a railway or fight a war or administer a social insurance program or do any of the other "big" things that government does? Clearly not! So why were some of the smartest people in US government 15 years ago so focused on nudges? Maybe because, in the US, the real problems were seen as unfixable and so nudging was the best you could do...and so nudges were overstated to the point where they would not just make incremental benefits but solve real problems.
In this sense, I would argue that even if a 1% gain is a big deal in absolute terms, they are a bad use of a scarce resource (political will and executive initiative) that in a sane world would be invested in much higher-ROI projects.
(by 15 years ago I mean 10 years ago. Evidently the Obama era is receding even faster in my mind than in real life...)
>Can nudges help you maintain a park or build a railway or fight a war or administer a social insurance program or do any of the other "big" things that government does? Clearly not!
I disagree...
Nudges could, in theory, make people litter in your park less (decreasing your maintenance costs), make workers on a railway more attentive to procedure (making building faster and cheaper), remind people to fill out and submit their paperwork for a social insurance program (increasing coverage and efficiency), and get people to sign up for the army (helping you fight a war).
Of course, nudges can't do any of those things ON THEIR OWN; you do need an actual program to do the thing. But nudges can be effective parts of that program, in the right conditions. Again, their power is that they're cheap additions to an existing program, and can be cost-effective.
And as for 'they went crazy for it then', remember that whenever a new technique pops up, there's generally a ton of low-hanging fruit that it can be applied to for large gains, and then after those are all taken care of, the marginal gains of applying it again are less and less. So we should expect to see an explosion of applications when the idea first catches on, and then less use for lower returns later.
But where were the early gains from nudges? American government is multiples worse than peers in some areas, especially infrastructure building (look at ARRA for an example from when Sunstein was in the exact perfect position to make this work better federally!) and whatever effect nudges have had seems to have been drowned out utterly at best.
American government being bad at a lot of things doesn't mean everything they do is bad, nudges could be good but not able to compensate for every other problem the US has in every possible area.
Especially when the baseline we're comparing to is some unspecified list of other governments, and we have no idea whether or to what extent they have also used nudges in their own policy.
There's the British 'Nudge Unit' which started off as part of the government and was then spun-off into its own thing, the Behavioural Insight Team, which is part-owned by its own employees, a charity named Nesta, and the government.
https://en.wikipedia.org/wiki/Behavioural_Insights_Team
https://www.bi.team/
https://www.nesta.org.uk/
The purchaser was the N.I.C.E., the National Institute of Coordinated Experiments. They wanted a site for the building which would worthily house this remarkable organisation. The N.I.C.E. was the first-fruit of that constructive fusion between the state and the laboratory on which so many thoughtful people base their hopes of a better world. It was to be free from almost all the tiresome restraints – “red tape” was the word its supporters used - which have hitherto hampered research in this country. It was also largely free from the restraints of economy.
"Has anyone discovered," asked Feverstone, "what, precisely, the N.I.C.E. is, or what it intends to do?"
"That comes oddly from you, Dick," said Curry. "I thought you were in on it yourself."
"Isn't it a little naive," said Feverstone, "to suppose that being in on a thing involves any distinct knowledge of its official programme?"
"Oh well, if you mean details," said Curry, and then stopped.
"Surely, Feverstone," said Busby, "you're making a great mystery about nothing. I should have thought the objects of the N.I.C.E. were pretty clear. It's the first attempt to take applied science seriously from the national point of view. Think how it is going to mobilise all the talent of the country: and not only scientific talent in the narrower sense. Fifteen departmental directors at fifteen thousand a year each! Its own legal staff! Its own police, I'm told!"
"I agree with James," said Curry. "The N.I.C.E. marks the beginning of a new era - the really scientific era. There are to be forty interlocking committees sitting every day, and they've got a wonderful gadget by which the findings of each committee print themselves off in their own little compartment on the Analytical Notice-Board every half-hour. Then that report slides itself into the right position where it's connected up by little arrows with all the relevant parts of the other reports. It's a marvellous gadget. The different kinds of business come out in different coloured lights. They call it a Pragmatometer."
"And there," said Busby, "you see again what the Institute is already doing for the country. Pragmatometry is going to be a big thing. Hundreds of people are going in for it."
"And what do you think about it, Studdock?" said Feverstone.
"I think," said Mark, "that James touched the important point when he said that it would have its own legal staff and its own police. I don't give a fig for Pragmatometers. The real thing is that this time we're going to get science applied to social problems and backed by the whole force of the state, just as war has been backed by the whole force of the state in the past."
And while all of these claim worthy-sounding aims, I still get a combination of 1984 and C.S. Lewis's N.I.C.E. from the whole operation. "Working towards innovation in social good" sounds very pleasant, but the methodology also sounds very manipulative.
My explanation for Hurricane Floyd is that it was caused by an assumption from the media that Trump was toast after he mishandled coronavirus and they could let it out into full flower.
My perception of my own loss aversion is that it's not so much losses that I'm averse to, but situations where I feel like an idiot. Losing even a small amount of money will bother me, if I lost it by making a silly decision.
I don't enjoy gambling, because I know that I'll get more displeasure from losing $100 gambling ("I'm such an idiot, why did I gamble?") than pleasure from winning $100 by gambling ("whoop de freakin' doo, a hundred bucks, not exactly life-changing money, is it?").
But I know that there are some people for whom this is reversed; the pleasure of winning $100 is more significant than the displeasure of losing $100, and these are the people you'll find filling up casinos. Not all gamblers are idiots who don't understand probability, some of them just have a mildly broken sense of loss aversion.
Maybe it's a question of one's tendency to magical thinking. The gambler probably rationalizes the events exactly the opposite of you: if he loses $100, well, that's not life-changing, and it's probably because of bad luck or [insert random cause here, failure to wear lucky socks, whatever], whereas if he wins $100 he feels like a freaking genius, because he has worked a system the boring old rationals (like you) don't grasp -- he really hit those Hail Mary's he was muttering under his breath, wore his lucky socks *and* underwear, put it all on good ol' lucky 13, et cetera.
Prospect Theory is touted as superior to Expected Utility Theory, and it has more parameters and so can fit data better within any given sample. But Prospect Theory parameter estimates even within subject also don't generalize across choice situations. Estimate them in one experiment, put the subject in a different experiment, and you will get different estimates. One explanation for this problem is here: file:///Users/apple/Downloads/Stewart_Canic_Mullett_2020.pdf
file:// is a file on your hard drive. You need to upload it to a http:// type link on a website, so we can see it.
Agreed. My problem with prospect theory is not that I don't think it exists in some sense, but that I don't think it is a very useful tool to replace what it was meant to replace.
Here is a working link:
https://psyarxiv.com/qt69m/
I'm out on behavior economics. It's easy to make an experiment that shows how dumb and irrational people are for not trading away their old coffee mug for five dollars when it's "worth" less than that. In the real world simple heuristics (it ain't broke don't fix it / buy a new one that might not be as good) are very effective and seem to add up to something approximating rational behavior.
For example it's well known that building / expanding roads doesn't decrease traffic congestion - more people drive until the level of congestion has reached an equilibrium. Things like that seem better modeled by rational behavior than anything from behavioral economics.
Not quite sure what your point is wrt roads - if usage increases to hold congestion constant, you haven't benefited the set of people who were using the roads before, but you have presumably benefited all the people who are now using them that were not previously, so expanding the road network has had tangible benefits.
My point is that drivers and many other real-world actors can be modeled by rational assumptions. Behavioral economics seems more oriented towards clever experiments with questionable validity.
The issue often comes from the way that development is shaped by traffic. If you cut transit times by expanding a road, developers tend to build further from the city center. This increases road usage (as people have longer commutes on average), and the roads will return to the same level of congestion. Conversely, keeping roads the same size (or even decreasing their size) tends to encourage development that is denser and that results in a lower average commute distance.
You could argue that this still benefits some people, as less dense development at the periphery tends to be cheaper than denser development in the core of cities. Thus expanding roads is essentially a type of subsidy for housing. However, it is an extremely inefficient subsidy for housing, as expanding roads is extremely expensive, uses land that could otherwise be used for development, and makes cities less walkable and pleasant.
Obviously there is some sort of middle ground. The above argument doesn't imply that all roads should be single-lane roads. However, the world is extremely complicated, and often second-order effects (such as changes in development patterns) are more impactful than the intended effects of an intervention (such as decreasing traffic congestion).
One issue with using behavioral economics (and, to a greater extent, the social sciences) to justify interventions is that they often fail to predict the second order effects of interventions. Often, simple heuristics are more robust to these second order effects than sophisticated but highly parameterized theories (such as prospect theory).
In general, I still think that behavioral economics (and even the social sciences) can be beneficial in guiding government policy. However, I think we ought to be conservative in their application, and expect that significant parts of the theories will have to be modified as they transition from lab to practice.
Why should commuters in your example care about distance commuted as opposed to time spent commuting? Surely they wouldn't mind commuting thousands of miles if they could teleport the distance instantly.
I agree that travel time matters much more than travel distance. The point I was making is that if the government makes an intervention to reduce congestion and decrease travel time, but induced demand ends up eroding away all (or at least a substantial fraction of) the gains that were initially realized, then the intervention will be significantly less effective than predicted.
Example: you are a city planner working on a new road. You estimate that the new road will cost $100 million to construct and will save 10 million person-hours of commuting over the next 10 years. This project is thus projected to save time at a rate of $10 per person-hour. This very well may be worth it for taxpayers. However, if things like induced demand result in the new road only saving 1 million person-hours over the next 10 years, then you only save time at a rate of $100 per person-hour. This would seem to be a very poor use of taxpayer money.
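As a quick check of the arithmetic in that example, a minimal sketch using the same hypothetical figures (none of these are real project numbers):

```python
# Hypothetical figures from the example above -- not real project data.
cost = 100e6                  # construction cost in dollars
projected_hours_saved = 10e6  # person-hours saved over 10 years, ignoring induced demand
realized_hours_saved = 1e6    # person-hours saved if induced demand erodes 90% of the gains

print(cost / projected_hours_saved)  # $10 per person-hour saved (looks reasonable)
print(cost / realized_hours_saved)   # $100 per person-hour saved (looks like a poor deal)
```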
Ahh I think I see the problem then, how you measure success. If you are trying to cut commute time for people already living in the city's environs, sure, that isn't so hot. If you are trying to make the city more appealing such that more people want to live and work there, then it was a pretty big success. Or if you were measuring success by "number of people who can get to work in the city within 1 hour" things would look pretty good.
Deciding what goals one wants to achieve is pretty difficult.
I'd actually argue that many road construction projects are suboptimal even when your goal is to make the city more appealing to live and work in. Larger roads make the city more appealing in the near-term, but often fail to make the city more appealing in the long-term. Quality high-density housing and public transit are more capital-intensive than roads, but they also have much larger potential returns. A city that invests more in housing, parks, public transit, and industry will see slower growth than a city that invests more in roads, but the growth is more likely to be sustainable, and the city is likely to generate more utility in the long term.
Further, there are issues like Braess's paradox, in which building new roads in the wrong location can lead to longer travel times for everyone, even without induced demand.
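For readers who haven't seen it, here is a minimal sketch of the textbook Braess example (the standard 4000-driver illustration, not data from any real network), in which adding a free link raises everyone's equilibrium travel time:

```python
# Textbook Braess example: 4000 drivers, two congestible links (t = x/100 minutes,
# where x is the number of drivers on that link) and two fixed 45-minute links.
drivers = 4000

# Without the shortcut, drivers split evenly between the two symmetric routes.
per_route = drivers / 2
time_without = per_route / 100 + 45            # 20 + 45 = 65 minutes

# With a zero-cost shortcut joining the two congestible links, the selfish
# equilibrium is for everyone to use both congestible links.
time_with = drivers / 100 + 0 + drivers / 100  # 40 + 0 + 40 = 80 minutes

print(time_without, time_with)  # 65.0 80.0 -- the new link makes everyone worse off
```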
Rational behavior implies that expanding roads does decrease traffic congestion, just not by as much as it would if people didn't respond by using the road more. Isn't that obvious? The reason more people are using the road is that it is less congested. If it were not, total demand for use of that road would be what it had been before the expansion.
On the quantum scale, particles behave unpredictably. On the large scale, they very closely approximate classical physics, and quantum physics doesn't matter. If you could manipulate behavior at a quantum level and at scale, you could change how classically-sized things behave. But in practice, quantum physics lets us build weird computers and does little else.
We could throw quantum physics out, model everything classically, and retain nearly everything that matters. I don't think this is a good argument for doing so.
Understanding how a thing really works is important. It helps you predict how a system behaves in extreme scenarios, it gives you extra leverage for controlling outcomes, and, insofar as science is the pursuit of knowledge, it's the obvious next step. You may not need to understand relativity to design a car, but that doesn't mean studying relativity is a pointless exercise.
Yes I agree with all this, but the evolutionary psychology stuff I've seen about heuristics seems better at getting to the "foundations" of behavior. (Also, it's been a while since I looked at microeconomics stuff, but my understanding is that a lot of this is an active area.) Behavioral economics seems to be mainly about poking holes in "rational behavior" with simplistic experiments with little real-world validity. That coffee mug experiment (I'm too lazy to look up the details, but pretty sure there have been experiments like this) tells me close to nothing about the real world. In the real world there is no perfect knowledge about how much something is worth, things are almost never exact substitutes, counterparties are not completely trustworthy, etc etc. So generalizing from that kind of experiment is likely to lessen our knowledge of the world, if anything.
Kind of like social psychology, I feel like much of that field (or the research the media has picked up on) has actually lessened our understanding of the world. More like astrology than quantum mechanics.
I think I should expand what I meant because maybe my example wasn't phrased very well. What I meant was: imagine a two-lane highway connecting a suburb to a city. In perfect conditions with no traffic, it's a 15-minute drive. At 9 AM on an average weekday, it's a 45-minute drive because of traffic during rush hour. Some people who are very time-sensitive change their work schedules to go in later, around 10; other people stay home and work from home. People plan their grocery shopping etc. for later in the day when there's less traffic.
The road is expanded to three lanes, and the planners naively expect traffic to decrease 50% and commute times to fall because of the increased capacity. But instead the commute times are still around 45 minutes - more people use the road now because fewer are staying home, changing their schedules to work later, etc.
Probably I should have used a simpler example, like prices: higher prices mean people buy less of something.
Even something like "now there's a new road it's easier and quicker to get to the city, so now it's worthwhile for people to apply for jobs there/new factories or businesses to set up in the city because they can increase their workforce" and now you have more people wanting to travel in to work at 9 in the morning, so the new road gets clogged up the same as the old one.
The limiting resource staying the limiting resource doesn't mean you don't invest in it. It continues to remain the best bang-for-the-buck.
Gerd Gigerenzer provides a better explanation than Kahneman and Tversky.
https://en.wikipedia.org/wiki/Gerd_Gigerenzer
"Gigerenzer investigates how humans make inferences about their world with limited time and knowledge. He proposes that, in an uncertain world, probability theory is not sufficient; people also use smart heuristics, that is, rules of thumb. He conceptualizes rational decisions in terms of the adaptive toolbox (the repertoire of heuristics an individual or institution has) and the ability to choose a good heuristics for the task at hand. A heuristic is called ecologically rational to the degree that it is adapted to the structure of an environment. "
He's mentioned in "The Undoing Project" by Michael Lewis. Apparently Kahneman and Tversky hated him. No wonder. He provides a simpler but far less dramatic explanation than they do.
His books are well worth a read.
What's described in the article above shows that there are more complex heuristics for loss aversion. It's not that people are illogical, but instead that they are working in a world of constant uncertainty and you know more about what you actually have.
Kahneman pioneered "adversarial collaboration" with one of Gigerenzer's co-authors.
I second Gigerenzer's work being very valuable.
Would you consider doing a piece that highlights some meaningful good science?
I’m sorta at the point where my trust in academics is at an all time low. It might be helpful to signal boost ethical people doing good work.
I remember at one point I had to stop reading Andrew Gelman’s blog because it’s just so demoralizing hearing how my tax dollars are fueling these petty narcissists to propagate lies.
Not a direct response to your ask, but Scott did write these two posts way back that are loosely related:
https://slatestarcodex.com/2017/04/17/learning-to-love-scientific-consensus/
https://slatestarcodex.com/2014/07/02/how-common-are-science-failures/
The 2017 post especially made an impression on me as someone who felt super-disillusioned with the replication crisis.
Thanks! I'll check them out.
There's good science out there, but it's probably too technical and not people-oriented enough for this blog? For instance, computer science is a mix, but there are quite a lot of genuinely insightful papers out there in sub-fields like computer graphics, AI, systems engineering, etc. On the other hand, there are also sub-fields that are a bit more questionable, like, erm, AI; also, advanced theoretical cryptography has subtle issues that a lot of people aren't familiar with.
I think what these better papers have in common is that they're about inanimate things instead of people. The moment people, or even worse, large groups of people, go under the microscope, it seems reliability goes to hell.
True but this blog focuses on social science research (mostly)
Sorry to pile on your disappointment, but yea, most social science research is low value. Without a good way to test the implications of theories in the real world, most social science boils down to "What do we WANT to be true?" That isn't limited to social science, of course, see e.g. climate change, but social sciences are just steeped in the problem. Even those trying to leverage statistics and other more rigorous methods run into the problem of "The Dreaded Third Thing", just having missed accounting for one variable that would have changed the entire outcome. And of course the bigger problem that when you want to find something is true, given enough data, statistical methods and time, you will find a statistical analysis that says it is in fact true.
If that sounds too unoptimistic, consider that social sciences don't even seem to be very good at describing what people actually do, much less coming up with ideas of how to change it. My personal favorite in this space is the statistic that 1/4 women are sexually assaulted in college. Obviously this is false without having a really wide band of what "sexually assaulted" means, which turned out to be basically anything unwanted like being bumped into exiting the bus. That is bad enough on its own, but it was taken seriously as the stronger definition of sexual assault. Social scientists that behave as though they believe that 25% of women get raped in college, and their parents still pay thousands of dollars to send them there, are not doing a good job understanding the social world around them.
It would be interesting to see studies of how people parse the words they hear. It's taken me a while to stop hearing "shot" as "killed".
Hence everyone repeating about a crazy guy who shot up a pizza parlor without nuance for the fact that he didn't shoot any person and wasn't trying to shoot any person. Yes the guy is crazy, but saying someone going out and "shooting up" a place has an entirely different context.
I mean literally every single thing you touch was made or enabled by a technology that depends indirectly on hundreds of thousands of pieces of basic research. The chair you sit on was made by a machine whose steel was optimized using mathematical basic research and physical knowledge of steel gained via microscopes and atomic theory and thousands of lab tests of steel composition and strength... etc. You can go down millions of different paths like that.
Any big hard science or technical journal available now, nature, or even arxiv, etc, will have piles of good science. I skim a few papers a day on average just for fun and a lot of them are good science!
I think the world around me has been built by research conducted mostly by industry. Not much of what's around me has been informed by the academic publishing complex. I'm sure there are papers, but those are more than likely tantamount to explaining to birds how they fly.
This view is informed by the scores of academics I've met who, with the exception of those in materials science, never seem to hold their own research area in high regard in terms of applicability.
I don’t think this is true at all. Directly, you’re right. But the industry builds heavily on basic research and academic research. The people discovering the mathematics and logic and structure and number theory and calculus were at universities. Physics professors did the basic quantum research, nuclear physics, bio, etc. same goes for pharmaceuticals and biotech stuff - the industry builds ridiculously heavily on basic research in academia. Materials science obviously, all sorts of chemistry, experimental physics - universities do LOTS of research. Yes, they aren’t scaling up production lines. But the production lines wouldn’t be there without the many many different studies and groups at universities across the globe doing work.
You might do well to read Matt Ridley's "How Innovation Works". For nearly all of human history, researchers were trying to figure out why what engineers did worked, as opposed to engineers learning from researchers and just scaling things up. There is of course some of both, but more often than not people making things are tinkering around at the edge of knowledge and tying together lots of different ideas and stumbling on clever things.
So ... it all depends on both researchers and engineers? Which is what I’m trying to argue?
No, actually quite the opposite. You are putting a lot of weight on universities, and I am arguing that very little weight goes there. You said "I mean literally every single thing you touch was made or enabled by a technology that depends indirectly on hundreds of thousands of pieces of basic research." That is simply false, and I pointed you to a book that does a nice job demonstrating that. Then again, perhaps reading isn't your thing.
Yea this checks out. In the world of finance, academics are totally useless. There's some academic influence in some certifications like the CFA and in MBA programs, but those theories were largely developed as post hoc explanations of what market participants were already doing.
The current instantiation of academia doesn't seem all that valuable other than as a jobs program for overachievers.
The book, which I’ll at least take a look at is - http://library.lol/main/D416293CAA3A5590B13B399791B4FB82
Sounds like bullshit, or at least myopia, to me. Quantum mechanics wasn't worked out to explain the workings of a transistor invented in 1890. Watson and Crick did not noodle out the structure of DNA to explain genetic engineering already being done commercially in the 1940s. Einstein did not work out relativity to help improve the otherwise inexplicable failure of a GPS system to give sufficiently accurate positioning. The math of the quantum double well was not solved in order to figure out how the laser worked. Nobody worked out the theory of X-ray crystallography so they could figure out what the HIV protease looked like to explain how the engineers came up in some ad hoc way with these marvelous new protease-inhibitor drugs that turned AIDS from a prompt death sentence to a chronic and somewhat manageable disease in the 1990s.
Oh Carl, you are such a charmer! Let's go through your examples:
1: Transistors and quantum... ok... Quantum wasn't worked on to solve much of anything of practical use that I can tell. What's your point?
2: Genetic engineering is one of mankind's oldest tricks. You might have met a dog, once. Perhaps seen a horse, or corn. We have been shaping animals and plants since time immemorial, literally. Clumsily, yes, but it didn't start because DNA was figured out.
3: Relativity was one of those things pretty far ahead of its time, that is true, positing ideas that would have to wait a long time to be tested.
4: Would they have thought about the double well without the laser?
5: Ok, this one... I don't even know where to start with this sentence. Sure, let's give you this one.
So you have, say, 3 of 5 examples, all in medicine or theoretical physics. Hooray! Medicine isn't a bad one for your case, as much of the basic research does tend to be of the shape "What happens if we stick X into Y?" It also tends to be of the shape "Huh, X doesn't react to Y the same way Z does... why is that?" which isn't so great for the case.
Now let's look at everything that isn't physics or medicine.
I think this is a very rose-tinted view of academia and doesn't jibe with how any of the companies I've worked with view R&D. Nobody is looking at published literature to inform where engineering effort should be spent.
The academic welfare complex looks more like Afghanistan, where the goal is endless expenditure, not replication, applicability, or the truth. The members of this welfare axis include journals, universities, and media, who respond to their own incentives of attracting eyeballs and prestige. Industry doesn't need any of these actors to survive and thrive.
Kind of depends on what the companies for which you've worked are doing. If, for example, they were at all involved in drug development, or medicine in general, or biotech or advanced materials, or pushing the envelope in chip design, or chemical engineering, then this would be a deeply silly attitude and they would not succeed.
For other areas of the economy, sure, the role of basic research is so far removed it makes no sense at all to pay close attention -- you'll hear about what's important 20-50 years after it's done. For still other areas, it would be nuts because the academic sector in question is corrupt or useless.
Perhaps the companies at which you personally have worked are not a representative sample of all industries?
You might find that your example industries are a very small sample of industries as well... you might be taking behaviors from that small chunk and misapplying them elsewhere.
Absolutely lots of academia is terrible and lots of areas of industry are poorly served by academia. But to imply that means no useful work in {computer science, engineering, physics, chem, biology} and literally thousands of sub fields ... I just don’t get it at all. What? Go check out the latest issue in any journal, or look at the history of literally any popular technique
Like, crispr cas9. Industry ... benefits from it. Discovered by “In 2012 Jennifer Doudna and Emmanuelle Charpentier published their finding that CRISPR-Cas9 could be programmed with RNA to edit genomic DNA, now considered one of the most significant discoveries in the history of biology”. Jennifer was at Berkeley and Charpentier was at (I think) Umeå University in Sweden. Jennifer actually took leave from Berkeley to lead research at Genentech two years before she discovered the use of CRISPR at Berkeley, but left after two months.
And I randomly picked CRISPR.
Another random one:
Knoll was born in Wiesbaden and studied in Munich and at the Technical University of Berlin, where he obtained his doctorate in the Institute for High Voltage Technology. In 1927 he became the leader of the electron research group there, where he and his co-worker, Ernst Ruska, invented the *electron microscope* in 1931. In April 1932, Knoll joined Telefunken in Berlin to do developmental work in the field of television design. He was also a private lecturer in Berlin.
The particularly stupid thing here is clearly there's massive working together and collaboration and transfer of ideas and people from academia to industry. And it seems to work well. So I genuinely don't know how one could say one makes discoveries at the expense of the other.
Also all that completely ignores the fact that universities educate most of the people who then go on to *do* industry research and many of the people who go to industry take with them discoveries from universities!
And, TOTALLY IGNORING BASIC FUNDAMENTAL DISCOVERIES, the hundreds of millions of papers that are available and published absolutely inform industry. And everyone else. I mean, go to literally the next Scott article, or the one after that; he cites dozens of published papers he clearly sees as worth reading. The blog has probably cited thousands of published papers over the past ten years. As have many other blogs.
Goodness, no. To take only the most salient example of the forest of electronics in your life, *all* of that is reliant on the experiments on the structure of the atom at the turn of the 20th century -- all done in universities, in areas of pure research that appeared at the time to have no conceivable commercial value -- followed by the immense discoveries in theoretical physics in the 1920s and 1930s, which, again, appeared to have absolutely no practical importance at the time. It was only another 10 years after that, in the 40s, when people at Bell Labs got involved and started wondering whether it was possible to use these ideas to build a "valve" a heck of a lot smaller than a vacuum triode...and then the good folks at Fairchild thought, gee, we could put a bunch of these together on the same piece of silicon...
It's certainly a good long way from the basic research through the applied R&D to the commercial product, but just because the originating work is well in the rear-view mirror doesn't mean it wasn't absolutely key to the whole process in the first place.
That doesn't mean *no* basic research is done in industrial settings, but alas the amount that actually is, these days, is a tiny fraction of what there used to be. (BTW research in improved use of neural networks, what Google calls "AI research", is by my definition very applied. They're not inventing entirely new ways of mimicking the human mind over there, just applying the model already known for decades.)
I should add most people in academia deplore the decline of the big industrial research lab -- the heyday of Bell Labs or IBM Yorktown Heights or even Exxon Annandale. These places are not what they used to be, and it's very unfortunate, as there was a lot of useful cross-fertilization, because we all used to go to the same conferences, read the same journals, talk to each other, and the differing fundamental viewpoints brought a lot of life to the discussions.
I should also add that in biology it's still probably more the case that a lot of basic research goes on in commercial settings, because of Big Pharma. But even there...I feel like there's a tendency to outsource that to little start-ups that they can buy (or not) once they have something promising.
So, still, overall, I feel like there is something screwy with our general social environment that makes original research in the commercial setting less viable than it used to be -- this is a loss.
I very much agree with this and the prior post.
Much of neural network innovation is just making new chips and distributed architectures that can together train at 10^aaaaa floating point operations per second. “ GPT-3 175B model required 3.14E23 FLOPS of computing for training.” Which, lol that’s half Avogadro’s number. And your post is better argued than mine!
I don't think the industry thing is as true as you think. There's a pipeline from academic research to industry. You figure something out in a grant funded academic context, and then you found a startup on the side, or quit to found a startup, or look for an industry partner to license your stuff.
My university has an office set up to make these sort of licensing deals. The creators get a slice, the university gets a slice and the licensing office is funded by a slice. I'm pretty sure this is common. Probably a lot of stuff that looks like it's industry sourced started on a campus somewhere.
Why would you want to boost your trust level? I don't trust a damn thing anyone says just because of the letters before or after his name, or even how famous he is. If it's important to me, I read the paper, and it better convince me on that basis alone, from its internal data and argument, even if I didn't know who wrote it or where it was published. If it's *really* important to me, I better be able to replicate its key results myself. Those are the only bases for trust when you're an empiricist, which is what I am. Trust and $3 gets you a cup of coffee. It has no serious role at all to play in science.
It's worth observing that George Floyd's death was videotaped and VERY extreme. You can imagine a police officer, in the heat of the moment, accidentally shooting a man he believed was a threat, making an honest mistake. You can't imagine kneeling on somebody's neck for several minutes while he begs for his mother to be an honest mistake. Comparing that to some bad experimental charity ad is sort of like comparing The Godfather to a home movie I made in elementary school. One is going to have a bigger impact than the other.
The video of Philando Castile getting shot seemed more egregious to me, but that cop was acquitted. Since the video also showed Floyd insisting he couldn't breathe when nobody was touching him and he was just sitting in the back of the car asking to be let out so he could lie down, it would be easy for a cop to dismiss his subsequent claims that he couldn't breathe as not being due to their actions either.
I agree about Castile, but there's weird shit about gun ownership.
<blockquote> it would be easy for a cop to dismiss his subsequent claims that he couldn't breathe as not being due to their actions </blockquote>
The fact that Floyd was already struggling to breathe doesn't exonerate Chauvin in the slightest. That's literally the worst time to place your knee on someone's neck (I'm no doctor, but...). Chauvin knew he was likely killing Floyd.
I think the standard is "should have known" given that he was convicted of unintentional murder.
My 9th grade (I think) math teacher spent a bit of time with us on SAT prep. Back then (the late 80s) the SAT was all multiple choice. Right answers counted for 1 point and wrong answers counted for some fraction of a negative point. If you straight up guessed randomly for each question you'd do worse than simply not answering. But what my math teacher pointed out was that if you could eliminate 1 possible answer and _then_ guess, you'd overall increase your score, at least statistically. I took this to heart because it was mathematically obvious, but I wonder how many other students in my class did the same.
(Note that I might be misremembering the exact details. Maybe you had to eliminate two answers and then guess? It's been a while!)
The SAT actually removed the penalty for wrong answers in 2016.
Before that you got +1 for a right answer and -0.25 for a wrong answer. That means that with 5 options, your expected value of guessing is 1/5 * 1 + 4/5 * -0.25 = 0. As soon as you can eliminate 1 answer, the expected value is 1/4 * 1 - 3/4 * 0.25 = 0.0625, which means guessing among the remaining 4 options has positive expected value.
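A minimal sketch of that arithmetic, using the pre-2016 scoring described above:

```python
# Expected value of guessing on a 5-option question under the old +1 / -0.25 scoring,
# as a function of how many wrong options you can eliminate first.
def guess_ev(eliminated, options=5, penalty=0.25):
    remaining = options - eliminated
    return (1 / remaining) * 1 + ((remaining - 1) / remaining) * (-penalty)

for e in range(4):
    print(e, guess_ev(e))
# 0 0.0     blind guessing is exactly neutral
# 1 0.0625  eliminating one option makes guessing worthwhile
# 2 ~0.1667
# 3 0.375
```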
Your details are right. Pure guessing was designed to just be nothing.
Other people had to be begged to take your teacher's advice to heart; they were terrified of losing a point.
Just getting rid of the penalty entirely was probably the right move. The only place it'll make a difference is for someone who runs out of time and doesn't have time to fill in every remaining test question with a "C".
At a tangent ... my approach to this issue, as a teacher giving non-multiple choice exams, was that a wrong answer got no credit, leaving the question blank or writing "I don't know" got 20%. That was mainly to discourage students from wasting both their time and mine bluffing, trying to sound as if they sort of knew the answer when they knew they didn't.
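Under a rule like that, the break-even point is easy to work out; a minimal sketch, assuming a right answer earns full credit and a wrong answer earns nothing:

```python
# Expected credit on one question under the "blank or 'I don't know' gets 20%" rule
# described above (assumes right = 100%, wrong = 0%).
def expected_credit(p_correct, guess, blank_credit=0.20):
    return p_correct if guess else blank_credit

for p in (0.10, 0.20, 0.30, 0.50):
    print(p, expected_credit(p, guess=True), expected_credit(p, guess=False))
# Guessing only beats writing "I don't know" when you think your guess has better
# than a one-in-five chance of earning credit, so honest uncertainty is rewarded
# unless a bluff is genuinely likely to land.
```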
Forgive my ignorance, but isn't loss aversion just another way of saying that money has marginal utility? If I have $10k in the bank and need it for say a car down payment next month, then losing $10,000 is going to be way more painful than the gain of winning an additional $15k. Isn't that obvious? Haven't economists known about marginal utility since the marginal revolution or am I missing something?
As for the nudging example, I don't know why economists would be surprised to learn that incentives matter. If you reward people to do x, they're more likely to do x. Is that behavioral economics or just economics?
The work on framing of choices for e.g. how much to tip is interesting, although having worked in marketing, specifically in the role of trying to optimize websites for profitability, the idea of framing prices is very well known, although I don't know if marketers or economists figured it out first. There are many (infinite?) variables on a page and they can all have some effect on the rate of people who complete some action, and most tech companies with a lot of traffic have a formal A/B testing program to figure out the best possible configuration of elements on a page or in a sequence of pages.
I suppose that economists could point out various instances of "irrationality" in customer behavior, like Scott's tipping example, but sometimes the rational choice is to just follow the usual pattern and go with the flow.
This feels like a case of economists assuming (incorrectly) that people are only optimizing for money, which seems silly. When tipping, for example, part of the equation is "how much time should I spend thinking about this?" but also "what will the driver think of me if I tip x".
There's an additional factor, which is "how much is it culturally appropriate to tip for this kind of service?" A lot of people, I suspect, will take their cue on this last question from the options presented to them by an app, which in some cases can lead to some odd things like tipping the Uber driver more than the Grubhub driver. But I dunno, saying "the way options are presented to people will affect which option they choose" just seems like something that's rather obvious.
"Forgive my ignorance, but isn't loss aversion just another way of saying that money has marginal utility? If I have $10k in the bank and need it for say a car down payment next month, then losing $10,000 is going to be way more painful than the gain of winning an additional $15k. Isn't that obvious?"
And what about losing vs. gaining $5, when you have $10k in the bank? The marginal utility doesn't change much between $9995 and $10,005.
Withdrawing from the bank takes time!
If the marginal utility of money is logarithmic, or more generally if the second derivative is negative, then you'll still see the same effect.
Yes, but the effect is tiny between $9995 and $10,005: ln(10005/9995) = 0.001. Or to use the example Scott mentioned, if you're a millionaire faced with gaining or losing $20, that's ln(1,000,020 / 999,980) = 4e-5. If you still see loss aversion that's stronger than that, and the studies do, the logarithmic utility function can't explain it.
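A minimal sketch of that arithmetic (the wealth levels and stakes are the ones from the comments above):

```python
import math

# Utility change, under log utility, from winning or losing a small amount at
# two wealth levels.
def log_utility_gap(wealth, stake):
    gain = math.log(wealth + stake) - math.log(wealth)
    loss = math.log(wealth) - math.log(wealth - stake)
    return gain, loss

print(log_utility_gap(10_000, 5))      # both about 0.0005 -- nearly symmetric
print(log_utility_gap(1_000_000, 20))  # both about 2e-5   -- nearly symmetric
# Log utility makes tiny bets look almost exactly symmetric, so it cannot produce
# the roughly 2:1 asymmetry that loss-aversion studies report at these stakes.
```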
You don't get to be a millionaire, on average, by gambling $20 here there and everywhere. Gambling aversion is probably strongly positively correlated with net worth.
Loss aversion is very poorly named, because it doesn't actually refer to a generalized fear of losses. The standard expected utility model accounts for the fact that different people and entities have different levels of aversion to *risk*, for exactly the reason you mention. So the standard theory already easily explains that there will be plenty of people who choose $0 with certainty over a 50/50 chance of -$40/+$60. Even though it has a positive expected *value*, it may not have a positive expected *utility*, depending on how quickly your marginal utility of money declines. That is indeed obvious, which is why the standard theory already incorporates it.
To have loss aversion, you have to be more averse to losses than is explained by risk. So to do those experiments, you have to confront the subjects with different choices that have the same level of risk, but have different mixtures of wins and losses and see if they behave inconsistently.
In any case, like most of these experiments, the experimental setup is pretty artificial and the choices in the real world are rarely irrational. For example, people generally have locked-in levels of expenses in the short run but freedom to do whatever they want with windfall gains, so the consequences of losses and gains aren't symmetrical. There aren't very many situations in life analogous to someone handing you $50 and then offering you a bet with lower losses.
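To make that concrete, here is a toy construction along the lines described above (hand someone money, then offer a bet); the piecewise-linear value function and the λ of 2.25 are illustrative assumptions, not parameters from any of the studies discussed:

```python
# Two framings with identical final-outcome distributions (and identical risk in
# expected-utility-over-wealth terms), differing only in how much reads as a "loss".
#
# Framing A: a 50/50 gamble of +$60 / -$40 from your current wealth.
# Framing B: you are first handed $10, then face a 50/50 gamble of +$50 / -$50.
# Both end at +$60 or -$40 relative to where you started.

LAMBDA = 2.25  # illustrative loss-aversion coefficient

def reference_dependent_value(change):
    # Piecewise-linear value function: losses weighted LAMBDA times as heavily.
    return change if change >= 0 else LAMBDA * change

ev_final = 0.5 * 60 + 0.5 * (-40)  # expected change in final wealth: 10 for both framings

value_A = 0.5 * reference_dependent_value(60) + 0.5 * reference_dependent_value(-40)
value_B = 0.5 * reference_dependent_value(50) + 0.5 * reference_dependent_value(-50)

print(ev_final, value_A, value_B)
# 10 -15.0 -31.25 -- same final-wealth risk, but framing B looks much worse once
# losses from the post-windfall reference point are weighted more heavily than gains.
```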
Thank you, this clarified my confusion about loss aversion; very helpful!
>You didn’t need Kahneman and Tversky to tell you that people sometimes make irrational decisions...
I dunno, maybe I do.
It's always seemed to me that quite a lot of what people describe as irrationality can also be described as cases where heuristics that roughly approximate rationality happen to conflict with more careful reasoning- like how loss aversion of large sums roughly approximates Kelly betting, or how that famous study about people deferring to an apparent group consensus about which of two lines was longer roughly approximates outside view reasoning.
It also seems to me that the question of when a person should rely on these heuristics may be more complicated than just "use careful reasoning when precision is required and time allows, and rely on heuristics otherwise". When a lot of people rely on the same heuristics, it's pretty easy to see what sort of risks those heuristics entail- but when you act on your explicit reasoning, you're inventing something new, and the risk of relying on that can be harder to judge. Sometimes, you know that a heuristic is particularly risky in a given situation, so ignoring that in favor of a reasoned alternative is the obvious choice. Other times, you can see that relying on a heuristic is very low-risk, so that while relying instead on careful reasoning might lead to a better outcome, doing so involves taking on more risk. An astronaut who follows a checklist is safer than one who invents their own procedures, even though the checklist is only a rough approximation of the ideal way to fly a spaceship.
So if it can sometimes be the correct instrumental choice to rely on a heuristic over reason, does it make sense to single out any negative consequences of that choice and say that they're the result of "irrationality"?
To some extent, of course, that's just a semantic question- but I think our ordinary use of the word can lead to real confusion. Person A might say "I've reasoned carefully about this situation, and my conclusion conflicts with your intuitive judgement, so I think your judgement is irrational." And then Person B might be like "I'm relying on an empirically very reliable rule of thumb, and I believe I have good reason to trust that more than your or my understanding of the particulars, so I think my judgement is not irrational." A and B might believe they have a disagreement, when in fact they're just each using the word "irrational" to describe different things.
So, that's the issue I have with "people sometimes make irrational decisions". I'm not entirely convinced that "irrational" as it's commonly understood is a natural category- it seems to conflate things as dissimilar as mistakes and the negative consequences of useful heuristics, and imply a non-existent common cause. I think we may need a completely different framework that would consign our current use of that word to the same bin as "phlogiston" and the four humors.
You're right that "irrational" isn't really a thing, and even "heuristics" might just be a case of "making a mistake a few times, then understanding the situation better afterwards", which isn't a sign of global irrationality at all.
Well said. But I dunno if the best approach is inventing a new psycho-econo-babble vocabulary word, maybe just have a lot of footnotes the first few times you use the word "irrational." It's a little easier to learn a new connotation for an existing word than an entirely new word...er...at least it is at my age ha ha.
Regarding George Floyd vs. Mary the Single Mother, the confounding factor could be media saturation. Mary the Single Mother is some anonymous woman from a stock photo. George Floyd is a character backed by seemingly endless media campaigns propagated by organizations with deep pockets and vast amounts of man-hours to spare. It's only natural that Mary would pale in comparison.
It's worth remembering that the George Floyd incident was preceded by a number of other carefully amplified incidents designed to inflame racial tensions.
Two weeks prior it was Ahmaud Arbery, the "jogger" who was shot by some neighbourhood watch types.
One week prior it was a slow news week so it was that argument about that dog in Central Park.
If George Floyd hadn't started the riots, then another cause celebre would have been found the following week.
The suggested tipping on credit card transactions has become so insane, that I have switched to paying in cash.
I noticed the other day that the scale at one of my favorite places ran up to 35%. I have always been a generous tipper; I figure that for very small orders (a single aperitif before dinner at another restaurant, a coffee, etc.) 30% was appropriate, and I don’t tip less than 20% for restaurants unless the service is quite poor.
But 35% is crazy. I have a lot of waiter friends, and I know how much they make. It’s like $25 to $35 an hour, and they don’t pay taxes. Bartenders make even more, most places. I’m not sure what’s going on there, but I will sooner stop dining out than pay 35%.
I would keep my Denver quarter and never exchange it for a Philadelphia quarter, but this is simply because Denver is a vastly higher-quality city than Philadelphia. If I had been given a Philadelphia quarter initially, I would've leapt at the chance to exchange it for a Denver quarter.
D coins are a dime a dozen, which is really saying something for quarters. :p
P coins are rare. But only coin collectors even know coins have a mint mark on them.
I didn't think Philly coins had any mint mark.
They do; the Denver D is replaced with a P.
https://en.wikipedia.org/wiki/Mint_mark#United_States_Mint_Marks
Strange, I had exactly the same reaction, even though I’ve never been to Denver, and only been to Philly’s airport on connecting flights.
Upon examining my “reasoning”, I think it’s because I’d just rather live in Denver, if forced to make the choice in some BE experiment, or in a supervillain’s lab with a gun to my head.
> It sure seems people cared a lot when George Floyd (and Trayvon Martin, and Michael Brown, and…) got victimized. There are lots of statistics, like “US police kill about 1000 people a year, and about 10 of those are black, unarmed, and not in the process of doing anything unsympathetic like charging at the cops”.
This is a really weird example statistic to provide right after postulating that Michael Brown "got victimized".
> Somewhere there’s an answer to the George Floyd vs. Mary The Single Mother problem.
My take is basically that George Floyd was picked as the mascot for something that was happening anyway. His causal effect is something near zero - rather, people who were looking for something happened to find him.
My explanation for George v. Mary: the Mary ad is "You should help her." The implied George ad is "you should help all possible future Georges." They don't seem at all commensurate.
The implied George ad is more like "Take revenge on your outgroup for what they've done to a member of your ingroup", which seems even further from the Mary situation.
People love to attack their outgroup and are always looking for an excuse to do so, people hate putting money into an envelope and sending it to charity and are always looking for an excuse not to do so.
"G&R are happy to admit that in many, many cases, people behave in loss-averse ways, including most of the classic examples given by Kahneman and Tversky. They just think that this is because of other cognitive biases, not a specific cognitive bias called “loss aversion”. They especially emphasize Status Quo Bias and the Endowment Effect."
As Scott implied, these seem more like explanations of the mechanism behind loss aversion than a refutation of loss aversion. It's like if someone observed a rainbow, and someone else explained the rainbow as the result of refraction of light by water droplets. The physical explanation is a confirmation of the existence of rainbows, not a refutation!
It's not quite this innocent, because some people use loss aversion as an explanation, as if it had causal power, which the new evidence suggests might be wrong. Prospect theory also leans pretty heavily into a form of loss aversion that isn't just other stuff.
For a metaphor which I think is really on the mark, but couldn't bring myself to put in the main post, see my old essay Against Murderism https://slatestarcodex.com/2017/06/21/against-murderism/ , which makes the same point about whether racism is causally real or epiphenomenal. I think this makes it clear that there's an important difference.
A big chunk of what behavioral economists call "loss aversion" is probably what normal economists call "marginal utility". As you alluded to at one point, the college students who are asked about 50/50 chances to win $60 or lose $40 are highly likely to be thinking something like "If I win $60 I get to eat a nicer meal or two, but if I lose $40 I can't fill up my gas tank this week".
Put more generally, the value of your last $100 is considerably higher than the value of your 10,001st through 10,100th dollars, and loss aversion includes that factor.
Given the result with millionaires and loss aversion down to $20 I'm guessing there's some part of loss aversion that is separate from marginal utility, but it's at least a big chunk of it.
Or possibly there's just a high overlap between people who don't like losing or spending money and people who have high net worth.
I definitely think that there's overlap between wealthy people (as distinct from high income!) and a dislike of spending or losing money. I'm curious if studies on billionaires would show much greater risk tolerance though - a millionaire is usually someone who scrimped and saved, a billionaire is always someone who gambled big and won.
Makes intuitive sense.
This is an issue with Scott's example and description. What you are describing is *risk aversion*, which is generated by a declining marginal utility of money. And that is definitely a part of standard expected utility.
To have loss aversion, you have to be more sensitive to bets of *equivalent risk* that involve different levels of wins and losses.
I was going to say the same thing. Risk aversion is standard Neoclassical economics. Strictly preferring not to take fair bets can be described by maximizing a diminishing marginal utility function over income or consumption.
For loss aversion to be more than a kink in the utility function, preferences must change when the reference point (the point that defines what counts as a loss) changes.
This is definitely some of it, but loss aversion partisans (including the Mrkva study I linked) argue that there's more, and that for example millionaires display loss aversion on bets of $20, which it's hard to argue is a marginal-utility effect.
It would require some very strange assumptions over the curvature of the utility function, indeed. Not strictly evidence rejecting the classical axioms, but very plausible.
But I think the complaint/suggestion is terminology. "Risk aversion" per se is not a departure from the Neoclassical axioms, so it shouldn't be considered part of behavioral economics.
I believe the original reference for this idea is Friedman and Savage, 1948.
Here is an interesting Tyler Cowen meditation on their idea:
https://marginalrevolution.com/marginalrevolution/2006/11/the_friedmansav.html
Not "marginal utility" but "declining marginal utility of income." The loss aversion claim is that the effect exists even for very small amounts, which shouldn't change marginal utility of income significantly.
FWIW I am a strong believer that basically all food delivery people (outside of maybe for a party?) do roughly the same job so I make it a point to always tip the same amount without any consideration for the cost of the food I ordered. This amount used to be 3 dollars but I recently increased it to 4 dollars since it's been about ten years since I started ordering delivery food and I figured I should try to keep pace with inflation. With the newer food delivery services though the service range is often very large so I will tip a bit more if the estimated drive time gets above 20 minutes.
By that reasoning, when you go to a store to buy something, the store's profit should be mostly the same for each item (maybe a little more if the more expensive item requires more storage space). Nobody believes this.
A tip, in theory, is a bonus for good service. The idea of tipping delivery people started on the basis that they would deliver the food more quickly, but as it became standard practice it got baked into their compensation. I think a more analogous example would be paying employees the same wage for doing the same job, which I think is a broadly popular opinion, vehemently so in the case of women. Honestly, your analogy makes no sense to me. It is not generally considered that the store is providing me with a 'having a shelf for goods to sit on' service that I am expected to pay them for, and nobody tips the 'service' employees working in stores; for some reason we are perfectly happy assuming that they should just always provide excellent service.
The thrust of my original post is that in restaurants a 20% tip in theory scales because at more expensive restaurants you are getting a higher quality of service. I don't really buy into this, but it is at least reasonable. So when I eat a $100 dinner the waiter makes $20; on the other hand, if I get a $10 dinner the waiter is looking at $2. Was the first waiter really working 10 times as hard or as well? Probably not, but I will bite the bullet on that social norm. The Uber Eats delivery person putting a $100 bag of food in their car and driving it to my house is doing a job that is 100% indistinguishable from the Uber Eats delivery person putting a $10 bag of food in their car and driving it to my house, so I tip them the same.
Mary the single mother vs. American single mothers just looks like scope insensitivity to me? Admittedly I don't know how large an effect you usually see from that when comparing options side by side. Or is scope insensitivity also fake?
I don't think it's scope insensitivity because you're donating $X to charity either way, which will presumably go to some single-mother-related cause. I'm not sure anyone expects their donation to actually go to Mary personally. Also, I don't think the ad was making the claim that your $X would actually help all single mothers in America to the same degree it would help Mary personally.
On identifiable victims: sometimes I write about a big problem and it feels stronger (emotionally) to write in detail about a single instance so the reader can make a personal connection, and sometimes it feels stronger to write with big numbers to emphasize the problem’s scope. They’re different tools that do different things, as I’ve learned in every writing class since high school.
It seems insane to me that anybody would summarize a distinction like this as either “there is a constant ‘effect size’ that makes personal connections stronger than numbers” or “the effect doesn’t exist, so they’re both the same”.
Kind of on the same level as asking whether making right turns or left turns got drivers closer to their destination — it may be a fun anecdote if one turns out to be more common, but if you really want to learn something the heterogeneity is the whole point.
Yeah this is the important point. You can’t just do a regression on some survey and call that a global bias number, because that bias isn’t real
I do recall reading some study that used a narrative about endangered geese in Russia (?), which claimed that the amount people were willing to spend was completely independent of the number of geese that would be saved - they just saw "big number" and it didn't matter if it went up or down by a factor of 10. Wish I could find the link...
On the fun anecdote level, I believe transport companies actively plan for only easy turns (right in the USA, left in the commonwealth) in their routing, because three turns that you can do at a red is faster than waiting for one tough turn, so if you measure "closer" by time then there really is a huge difference between right and left turns, especially for unwieldy vehicles.
The state of New Jersey took this to a level of art with the jug handles that eliminate left turns completely.
Jug handles explicitly trade off space (they require more space than a "Conventional" left turn from the center lane) for safety. (Then they compromise the safety aspect with some space-saving designs).
Sort of. They require more space at the intersection, but they allow for narrower roads because there is never a need for left turn lanes.
The space lost to a jughandle is always more than the space gained by not having a turn lane; and it's "more valuable" land, in that an additional lane takes a chunk of frontage, but a jughandle takes land from the intersection corners and much further "back" from the road.
Jughandles are (usually) functionally at-grade (sometimes partial, sometimes full) cloverleaf interchanges; and rarely other sorts of highway interchanges.
(I live in NJ, and as an immigrant to NJ from another state, jughandles are one of the things I wish were not NJ-specific)
Typo? Kahneman and Tversky just sort of threw all all this stuff ... [should it read as "threw away all this stuff"?]
I think it's interesting that you miss a possible explanation for your medical school friends' behaviour – honesty. The 'don't know' option is clearly there to emphasise that it is better, as a doctor, to admit you don't know something than to guess. They didn't align the scores correctly for that, as it's a secondary goal of the test, but I think many if not most people would understand the point and be reluctant to guess wrongly as it has a certain unethical feeling – it certainly does to me.
The idea that the test was just an optimization problem for point scoring and did not have any ethical ramifications can be a blind spot of behavioural economics and rationalism in general. You try to dismiss this with 'the average medical student would sell their soul for 7.5% higher grades' but I don't think they would.
I discuss this on the linked post. Nobody refused to guess during a straight multiple choice test.
I read the linked post and loved it. But the comparison with straight multiple choice isn't quite fair, I don't think. The most impeccably honest person isn't adding any scruple of honesty by leaving a multiple choice question blank. That only decreases the expected correctness! But if there's an explicit "I don't know" option...
Come to think of it, maybe the inconsistency is in failing to answer "I don't know" for literally every question! Probabilities are strictly between 0 and 1 after all.
Not that I'm seriously arguing against the point. Of course one should never answer "I don't know" under that scoring system.
You touch on it in the linked post, when your friend says the test is supposed to measure knowledge, but you still consider his primary motivation loss aversion, with the moral aspect as a rationalisation. If it's loss aversion, both of you see the test as a maximisation game, but his loss aversion causes him to be unable to rate wrong guesses properly, which he explains post facto with a moral argument.
I would suggest that the inclusion 'don't know' creates an ethical or social dimension not present in the other test, and that most people wouldn't see it as just a points-maximisation game. I would happily guess with no 'don't know' option, but not with one – you can call this 'irrational', but it seems analogous for people who consider many situations money-maximisation games and think moral people are acting irrationally, but many would not consider that laudable.
I guess I see this as more thoroughly fleshing out the rationalization. You're making fine points and it prevents us from proving that loss aversion is the reason for the failure to optimize for points. But, well, to quote a key paragraph from Scott's original post:
<blockquote>
I had people tell me there must be some flaw in my math. I had people tell me that math doesn't always map to the real world. I had people tell me that no, I didn't understand, they *really* didn't have any *idea* of the answer to that one question. I had people tell me they were so baffled by the test that they expected to consistently get significantly more than fifty percent of the (true or false!) questions they guessed on wrong. I had people tell me that although yes, in on the average they would do better, there was always the possibility that by chance alone they would get all thirty of the questions they guessed on wrong and end up at a huge disadvantage.
</blockquote>
I guess it seems clear enough that it was the fear of losing points that was distorting their reasoning. Especially if Scott is right that the students didn't change their strategy *at all* in light of the proof of the optimality of guessing. If it were scruples at play, they'd be like "ok, as long as I have some inkling, I'll guess; but it feels dishonest to not choose 'I don't know' when I really have no clue".
I was thinking this too. If someone told me directly 'Guessing on questions you don't know the answers to definitely has a better chance of giving you higher marks' then I might be tempted to guess, but otherwise I'd feel a bit slimy even trying to do that calculation (to figure out if it was worth it to guess). And maybe Scott's friends are more scrupulous than I am.
"Relatedly, Uber makes $10 billion in yearly revenue." Cory Doctorows says it's less, https://pluralistic.net/2021/08/10/unter/#bezzle-no-more
I'm discussing revenue. I think Cory is discussing profit.
1. A nonlinear utility function for wealth is often perfectly rational. The mathematical solution to how to choose the sizes of positive-expectation bets so as to grow your bankroll as quickly as possible in the long run is called the Kelly criterion. It is based on a logarithmic utility function: you choose bets so as to maximize E(log(wealth)). (A minimal numerical sketch follows below.)
2. I think the answer to the George Floyd question is the viral video. If instead of a video it was a text summary in some local Minneapolis newspaper, we would probably not have heard of George Floyd.
3. My AP physics teacher was adamant that there is no such thing as centrifugal force. Instead there is centripetal force acting against inertia.
(I would gladly trade 10% of my physics/chemistry/math ability for 10% of Scott's writing ability)
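To make the Kelly point concrete, here's a minimal numerical sketch (my own illustration, with made-up bet parameters) of choosing the stake that maximizes E(log(wealth)):

```python
# Minimal sketch of the Kelly criterion for a simple repeated bet.
# Assumptions (mine, for illustration): you win b times your stake with
# probability p, lose the stake with probability 1 - p, and you choose
# the fraction f of your bankroll to wager each round.

import numpy as np

def expected_log_growth(f, p=0.55, b=1.0):
    """E[log(wealth)] per bet when staking fraction f of the bankroll."""
    return p * np.log(1 + b * f) + (1 - p) * np.log(1 - f)

# Numerically find the fraction that maximizes expected log growth.
fractions = np.linspace(0.0, 0.99, 1000)
best_f = fractions[np.argmax([expected_log_growth(f) for f in fractions])]

# Closed-form Kelly fraction for this bet: f* = p - (1 - p) / b
print(best_f)             # ~0.10
print(0.55 - 0.45 / 1.0)  # 0.10, matches the numerical answer
```

Note that log utility never stakes the whole bankroll even on a favorable bet, which is the precise sense in which this kind of "risk aversion" is rational.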
The George-vs-Mary question is like asking why some gofundmes are successful and why some aren't, which mostly seems to be a question of "do you have a social network that's willing and able to fight for you and recruit more help (which will in turn recruit more help, and so on), or don't you?" A strong cause can help expand the social graph, but it is neither necessary nor sufficient (even life-or-death gofundmes have been known to fail).
That's a plausible claim, though I think the two Kickstarters I know of which did extremely well (the Reaper figurines and the Coyote and Crow game) had clever incentives in the first case and an unusually attractive product in the second.
The one I was close to that failed (Terra Memorialis, which was for a business to set up 3D online memorial rooms) had a weird product which was hard to describe briefly. Not that hard, but it's not like the gaming ones, where people already knew a lot about what was on offer.
Still, you may well be right about typical GoFundMes and Kickstarters.
I enjoyed this article but one thing stood out.
"Whoever decided on that grocery gift card scheme was nudging, whether or not they have an economics degree - and apparently they were pretty good at it. "
I disagree! They weren't nudging at all! They were applying standard neoclassical economics to pay someone to do something they wouldn't otherwise do.
Thaler and Sunstein describe a nudge this way (emphasis added):
A nudge, as we will use the term, is any aspect of the choice architecture that alters people's behavior in a predictable way *without* forbidding any options or *significantly changing their economic incentives.*
Paying people (with cash or groceries) to get vaccinated is significantly changing their economic incentives.
And in fact, one standard behavioural economic concept (crowding out intrinsic motivation) might suggest that one shouldn't pay for vaccinations, as then you crowd out the intrinsic motivation (we owe other citizens a duty to be vaccinated) with cash/groceries.
This mistake is made all the time - people describe standard economic analysis as behavioural economics, or "nudges".
I'll give you some examples in vaccination that I think *would* qualify as nudges:
- instead of paying 10 000 people $10 each to get vaccinated, run a lottery and pay one person drawn at random $100 000
- Instead of asking people to opt in to a vaccine, ask them to opt out.
- send them a letter telling them how many people on their street have been vaccinated (if it's a lot. If it's not... maybe don't).
Even then I don’t think “nudge” has anything meaningful about it because “people really like lotteries” and “public shaming” probably aren’t usefully understood as having any sort of mechanisms in common
Your lottery example seems odd in that it is still a concrete economic incentive; the (monetary) EV is unchanged...
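As a quick back-of-the-envelope check of that, using the numbers from the list above:

```python
# Quick check of the EV point: $10 each to 10,000 people vs one
# randomly drawn $100,000 prize (numbers from the list above).
n_people = 10_000
flat_payment = 10.0
lottery_prize = 100_000.0

ev_flat = flat_payment                 # $10 per person, guaranteed
ev_lottery = lottery_prize / n_people  # $10 per person, in expectation
print(ev_flat, ev_lottery)  # 10.0 10.0 -- identical monetary EV,
# so any behavioral difference comes from how people weight a tiny
# chance of a huge prize, not from the size of the expected incentive.
```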
The problem is a confusion between the meaning of "nudge," which is a small push, and the way it is used in the book _Nudge_, which is the use of choice architecture to affect individual choices. Only the latter involves behavioral economics.
Regarding Galenter and Pliner, what they focus on is the *exponent* of the curves, and they find that the *exponent* of losses is larger than the *exponent* of gains. Scott summarizes this as "loss aversion", but this is not what it means.
What does an exponent tell you? Let us take exponent 1 versus 2 for concreteness. So the curve for gains looks rather like f(x) = const*x, while the curve for losses looks rather like g(x) = const2 * x^2.
Loss aversion would tell you that f(x) < g(x). But this is a completely different question, and the exponents don't tell us this. If you want to know whether f($1) < g($1), then what matters is the two constants const and const2. This is exactly the thing that the exponent analysis tried hard to remove from the picture, because the study wanted to know the exponents!
What exponents do tell us (if the results scale to a wide range) is that we have f(x) < g(x) for *very large* values of x, because a quadratic curve grows faster than a linear one, and at some point the constants no longer matter.
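To make that crossover concrete, here's a toy illustration with made-up constants (nothing from G&P's actual fits):

```python
# Illustration (made-up constants) of the point above: with a linear gain
# curve f(x) = c1 * x and a quadratic loss curve g(x) = c2 * x**2, which
# one is bigger depends entirely on where you evaluate them.

c1, c2 = 1.0, 0.1   # hypothetical constants; the exponent fit says nothing about these

f = lambda x: c1 * x        # psychological magnitude of gaining x dollars
g = lambda x: c2 * x ** 2   # psychological magnitude of losing x dollars

for x in [1, 5, 10, 50, 100]:
    print(x, f(x), g(x))
# x = 1:   gain 1.0  > loss 0.1   (small losses *under*weighted)
# x = 10:  gain 10   = loss 10    (crossover at x = c1 / c2)
# x = 100: gain 100  < loss 1000  (large losses overweighted)
```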
Going back to Hreha, he claims
"...the early studies of utility functions have shown that while very large losses are overweighted, smaller losses are often not."
I don't know whether this claim is true or not, but it is absolutely compatible with the exponents found in G&P. In fact, Hreha's claim *requires* that the exponent of losses is larger than the one for gains. (Even more, if the curves were truly of the form const * x and const2 * x^2, then it would imply Hreha's claim, because the function const2 * x^2 is smaller than const * x for very small positive x. But this would require that the fit is accurate for a specific narrow range of x, and that's probably not what the fit was optimized for.)
I buy Scott's analysis overall. The above is a subtle point, which probably got lost at some stage of iterated citations, and apparently it was not important for Kahneman and Tversky anyway. But in this detail, Scott's analysis is wrong.
Wouldn’t diminishing marginal utility fully explain the differences for large and small x?
Yes, absolutely. Gains have *de*creasing marginal utility, while losses have *in*creasing marginal utility (as I lose more and more, every additional dollar becomes more painful). So it makes sense that gains have a small exponent (smaller than one), while losses have large exponents (larger than one).
But that is all compatible with Hreha's claim and with the G&P paper. While Scott was arguing that the G&P paper would contradict Hreha's claims, and was confused why Hreha cites it as support.
For someone so determined to expose loss aversion as pseudo-science, Hreha was coming across as kind of anti-science. I couldn't quite put my finger on how until I got to
"In my experience, creative solutions that are tailor-made for the situation at hand *always* perform better than generic solutions based on one study or another."
Which, sure: common-sense N=1 ideas that you can't really test but "come on, we all know it works" may be the right strategy sometimes (something something seeing like a state). But it's not exactly the moral high ground to demand extreme rigor of others, especially when at least they are trying.
Great review; thanks. My experience (and decent evidence in the literature) suggest that specific BE strategies can be very effective when there is a gap between intention and behavior. For example, people may *intend* to save for retirement but never get around to doing it. In such situations, switching from an opt-in to an opt-out approach has been proven to activate latent demand for such savings. Active choice -- stopping people in the middle of a process and requiring them to state their preference -- can also be effective. (My team used this in a healthcare setting to substantial effect, helping increase the number of patients receiving medications for chronic conditions via home delivery.)
One challenge with these two strategies is that there is no free lunch; unlocking latent demand requires a lot of backend rework to make things easy and automatic for the consumer. In addition, they are counterproductive if there is no latent demand; you're just creating friction for your customers.
But all of this is to say that some elements of BE / choice architecture are alive and well, and their effectiveness is not easily explained by classical economic theory.
“Behavioral economics” as a set of mysteries that need to be explained is as real as it ever was. You didn’t need Kahneman and Tversky to tell you that people sometimes make irrational decisions, and you don’t need me to tell you that people making irrational decisions hasn’t “failed to replicate”.
To be even more precise, the point is that people are irrational in somewhat predictable ways. If they were just irrational, you'd expect as many behaviours/responses as there are people/possibilities, but in many experiments/problems, that's not what we're seeing. People make suboptimal/irrational decisions in a way we can predict/exploit...
See https://www.investopedia.com/terms/m/mondayeffect.asp for a simple example.
To whatever extent these folk stock market trends are real, which most aren't IMO (lol technical analysis), aren't they driven by transient characteristics of the underlying market or self-perpetuating ideas driving buying patterns, and not "cognitive biases" that are at all consistent or predictable? Which is what I think is true of all of this - "irrational" decisions are just decisions, people are dumb sometimes, but they'll sometimes correct over time, and their mistakes are closely and arbitrarily driven by the problem's details as opposed to general rules (even optical illusions are overcome instantly and don't actually trick people in significant ways for more than a minute!!)
I happen to agree on technical analysis, though simple stuff (support, resistance, Fibonacci retracement) is often "correct" (self-fulfilling); but if you want to see a more complex example of bias - trend following is an investment strategy (or family of strategies) that works reasonably well over time and shouldn't be possible at all if the rational random walk were an accurate description of markets.
You have tons of these behavioural issues in finance. Loss aversion is pretty real in that realm. Herding mentality (i.e. chasing something b/c everyone is doing it) is also common, though I kind of see that as "rational" given career risks.
I really don't think these issues are idiosyncratic or corrected over time. They're systematic, that's why you have successful CTAs.
They are idiosyncratic and corrected over time over a multi decade time scale and when they’re idiosyncratic to the entire US stock market, but not generally true for all actions taken by all people. But for a different type of market, for different people using it, different trading software, different underlying assets, there’d probably be different issues. The claim is that there’s some “systemic bias” that makes people do something, which is wrong - and people will stop doing the “biased thing” when that situation changes - people will stop tipping when not tipping isn’t seen as shameful or when too many companies use the tipping increase strategy, and that “bias” is more of a culturally and tactically local thing than it is anything about human nature. People trend follow because it works - but why does it work? Sure the market isn’t a random rational walk or w/e, but that’s not a cognitive bias, it’s a property of the way the markets and companies work and reveal information and grow.
Is it possible that, as people become aware of behavioral economic findings, they adjust their behavior and subsequent studies have a harder time replicating the originals?
I feel like a bit of a broken record always talking about COVID on this blog (I have other interests, I swear!), but this part seems disagreeable:
> Nobody believes studies anymore, which is fair .... There are about 90 million eligible Americans who haven’t gotten their COVID vaccine, and although some of them are hard-core conspiracy theorists, others are just lazy or nervous or feel safe already.
"lazy or nervous or feel safe already" is not the least charitable way to describe them but it seems pretty close. How about this rephrasing:
"There are about 90 million eligible Americans who haven't gotten their COVID vaccine, and although some of them believe that scientists are frequently and deliberately collaborating with each other to generate fake consensus about untrue claims, others just don't believe the studies that support the vaccines are reliable due to the normal array of scientific errors and biases."
Really, the bloody minded focus on "nudging" people into taking vaccines (more like shoving) is one of the best ways to create resistance to a proposal. The moment a government starts "nudging" people, you're basically taking a position that you're more rational than them and that no argument, regardless of how well phrased or debated, can possibly get the masses to do what is best for them, because they are stupid. But people don't like it when government officials imply the citizens are stupid and the officials are enlightened, partly because both world history and the present day have cases where that idea got taken too far, leading to lots of people ending up dead or in camps.
Governments should just be focusing on providing as much high quality, trustworthy data about vaccines as possible and then just leaving it there for people to study, poke at, and pick up on their own initiative (or not). Instead a few of them are openly talking about building various kinds of inside-out open air prisons in which ordinary doors and walls act as the fences, or even putting the unvaccinated under perma-lockdowns (i.e. house arrest). This simply says, "we can't win the arguments on their merits so we have to force you to comply", which in turn makes whatever they want come across as much more dangerous.
Nudging is bad, but the studies seem pretty good and also if the vaccines had problems wouldn’t we have noticed given 100Ms have taken them? Yeah people should rationally argue the issue, but that doesn’t seem to work too well for the holdouts (nudging doesn’t either though so idk)
Noticed how? We only really have three ways to notice things like this:
1. The media.
2. Government statistics.
3. Personal anecdotes.
None are especially reliable at the moment because too many people have bought into the belief that everyone has to take the vaccine no matter what, it's been moralized, become a matter of ideology, that undermining the rollout makes you a bad person etc.
Consider: the media is by and large refusing to report on vaccine deaths, e.g. they'll report a death of someone slightly famous but fail to mention that it was a heart attack in a healthy guy two weeks or less after they took the vaccine. I've seen several cases of this now. They would rather omit highly interesting detail that makes the story much more relevant, than do anything that might reduce uptake.
Government statistics: there's really only one dataset gathered on this, the databases governments run that collect reports. For example you can see a better rendering of the US one at openvaers.org. Those all show massive numbers of side effects and deaths, far more than any other vaccine programme in history, and are certainly under-counting by a lot because people are being told to expect nasty side effects. For instance nobody I know has bothered to report when the vaccine makes them feel really sick for a day or two, because that's a "known" side effect. I've even heard of doctors telling patients to expect Bell's Palsy! And when people do seem to be injured by the vaccine it's apparently quite common for doctors to tell them that it's all just a coincidence.
Personal anecdotes: you can't share them, it all gets erased from the internet as fast as possible, at least from US sites. Still, people screenshot them and there's a collection piling up here https://t.me/s/covidvaccineinjuries/ - I have no idea how credible those are, but there it is.
That leaves the studies. That's "Nobody believes studies anymore". You can say a medical RCT is a totally different level of study to most of the ones that get revealed to be unreliable, and indeed it is. But there are nonetheless doctors who assert that drug trials don't seem to reliably detect side effects. Sebastian Rushworth has written about this here:
https://sebastianrushworth.com/2021/07/19/do-drug-trials-underestimate-side-effects/
The summary is that the effect seems to be real but nobody knows exactly why trials don't seem to correlate that well with doctor's real world experience. One is probably exclusions. COVID trials excluded lots of people who are now being given it e.g. the very elderly, pregnant women. HIV+ patients were included and then excluded at the end in one trial without explanation, I think it was Pfizer. In some trials (of statins) it's alleged that the trials use a "run-in" period which excludes people who would have bad side effects and other things you'd think wouldn't be allowed.
So, I guess it's easy to assume that the system is working, is rational, scientifically based and is tuned to look for problems in the same way it would be in normal times. Problem is, I see no evidence of this. Instead what we see is mass hysteria on an ahistoric scale in which vaccination has been defined as a quasi-religious issue. Look at this very blog post! It states that people not trusting scientific claims is fair, and then paints the unvaccinated as lazy scaredy-cat conspiracy theorists. This is a topic on which even rationalists find it hard to be rational.
Ignoring “the media” ... “government statistics”, i.e. medical monitoring, are actually pretty good in the US - we regularly catch one-in-a-million drug side effects and add warnings based on that. And we *did* catch a few-in-a-million heart abnormality side effect for some vaccines! But didn't find any other ones. It’s very good - the FDA, which Scott calls over-cautious, is very cautious in many areas! VAERS is another thing - self-reported cases, for those with the knowledge to look at them - and self-reported side effects are unreliable and random, and might be increased by a sudden interest in vaccine side effects among a lot of people combined with them spamming the site all over social media.
Yes, the vaccine making you feel sick for a day or two is an intended side effect that all vaccines have. You being sick is your body responding to the foreign vaccine proteins and cell death, which is intentional because it’s how the immune response can work and prepare for the actual virus! Expecting Bell’s palsy - such side effects are common *among* most vaccines at very, very low frequencies - individuals shouldn’t expect it, but it may happen. But at substantially lower rates in all cohorts than autoimmune effects from the virus itself lol. So that’s a bit of a misstatement I think. And the influencer who appeared to have Guillain-Barré was, I vaguely recall, found to be fake. I’m not sure what sort of injuries are being reported?
Yeah. The original trials, with 40k people each, won’t find 5-in-a-million side effects. Nobody disagrees. The detection comes when the drug is administered to millions of people, five of whom then have some side effect, at which point postmarketing surveillance regularly finds it, like it did for AZ, J&J, and thousands of drugs. And that does seem to work pretty well! Same for exclusions - yes, the trials excluded them. But once it’s administered to those groups, I don’t see why it wouldn’t be caught there too.
Because the blog has lots of scientists who read it who read all these papers and think “yeah the vaccine stuff checks out” and “oh boy social psychology does not lmao”
And FWIW I do suspect some vaccines have long-term negative subtle weird effects because of the lipids or whatever. Thimerosal probably is bad. The microscopic dose of microplastics and plastic chemical additives and synthesis side products is probably bad too. But that’s all quite small scale. The risk-benefit of vaccines, which help prevent a LOT of disability and death, is rather clearly positive by really most standards. If you’re worried about plastic, any vaccine probably has 1,000 or 1,000,000 times less of it than your food and the air you breathe, and the same comparison holds for the synthetic chemicals and side products in any other pill or food you eat. That’s compared to an x% chance of just dying, which is quite bad.
Well, rather than dive into a point-by-point back and forth, let me step back and re-focus on the meta-issue.
What you're saying here is basically, some studies are terrible but these studies are pretty great and that's why all this is fine. My point is that so-called "anti vaxxers" are not actually lazy or whatever, but rather that they're doing Bayesian reasoning with very different weights and categorizations attached to terms like "study". They don't trust scientific institutions or public health bodies anymore, and if you don't trust these bodies then it is rational to reject their advice, given that the only real evidence you have that it's a good idea is their own studies, yet your prior on scientific claims by these people being honest/true is very low. For example, because COVID has been a nudge train from the start, with Dr Fauci telling the NYT at least twice now that he deliberately lied about masks and vaccine herd immunity thresholds in order to nudge or manipulate people's behavior.
That's why trying to explain resistance to COVID vaccines as anything other than an expected outcome of previous institutional actions is a non-starter for me. I suppose you could potentially encompass non-trusting people under "feel safe already", but people who don't trust studies anymore seem by far the biggest contributor, and it deserves to be stated explicitly.
You can just download the Pfizer and Moderna studies and read them and look at the graphs. They are “their own” studies only if “they” is “every scientist at every university”, and Pakistan and Israel and South Africa and Cuba have their own safety and efficacy studies if you entirely distrust the US. There’s nothing Bayesian here - if the Pfizer study had a sample size of 50-500 and was based entirely on a written survey, I would dislike it too. And if social psych studies were instead n=50k, had direct measurements, had directly relevant and usable measurements and experiments, and observed effects with r^2 = .99, p<.0001, posterior Bayesian analysis p>.999999, and were replicated many times, I’d like social psychology a lot! The CDC and WHO are not Pfizer, and additionally many of these people trust the CDC and FDA and Pfizer and healthcare as a whole to treat their injuries, do yearly checkups, give their parents pills, and develop new ones. So I don’t see how some health bodies being retarded means that other ones are totally untrustworthy when you can just look at the studies and see they’re strong. Fauci and NYT may have lied or whatever, but the evidence is directly available on vaccines from many countries and studies. And I don’t think any of them are actually using Bayes’ theorem (and nor should they ... just read some of the evidence).
“I don’t trust studies anymore” is ridiculous - you do trust studies: you trust them to ensure your tap water doesn’t have too much chlorine, trust them that your antibiotic actually kills bacteria, trust them on the safety measurements of the car you drive, trust them that the toothbrush and toothpaste you use prevent tooth decay and cavities... And again, you can just look at the evidence here: even if you don’t trust studies, you can observe that half the country is vaccinated, use a non-self-reported source of information about population health, whether anecdotal or internet statistics, and you’ll see stuff.
These people don't trust academics any more than they trust big pharma or big government. They have the more old-fashioned (conservative, little c) approach of trusting personal experience and personal relationships. They don't trust the tap water studies, they trust their experience with having drunk the tap water where they live since they were children. They don't trust the studies on the antibiotic, they trust their doctor, whom they have a personal relationship with, and even him only if they are already sick. They don't trust studies for tooth-brushing, they trust their parents who made them do it from a young age. They don't trust car safety studies at all - many of them are the same people who oppose helmet and seat belt mandates and think airbags will make them lose control of the car during the crash. They aren't scientific people, and make decisions on non-scientific (usually personal relationship-based) grounds.
On vaccines they trust people they trust around them. In conservative places, these are conservative people who in turn trust other conservative people who all don't trust this socialist medicine vaccine. (In liberal places it is the same way, hence hippie anti-vaxers who think the MMR vaccine will poison their kids.)
I guess I’m unlikely to agree with you about the cost/benefit of vaccines but find myself in strong agreement with your comment. Well said
"or feel safe already, and have good reason to." Such evidence as we have suggests that someone who has had Covid is at least as safe as someone who has been vaccinated. Vaccination in addition may make him safer, but not by much.
Further, for non-elderly people with no special risk factors, the chance of dying if infected is about one in ten thousand, so if they think the chance of getting infected is one in ten, vaccination reduces the risk by about one in a hundred thousand, reducing life expectancy by about twenty seconds. Choosing not to get vaccinated is not, under those circumstances, obviously irrational.
I've just put up a blog post with a more detailed analysis. My one in ten thousand figure is correct for a 25 year old according to one source I found, low for older non-elderly.
http://daviddfriedman.blogspot.com/2021/08/vaccination-arithmetic.html
As a non-US citizen, I am baffled by the usage of tipping as an example for a "rational economic actor". I get the point you're trying to make about nudging, but tipping is much more of a cultural custom than anything that is done for economic purpose. Customer service still exists in the countries where people don't tip, after all.
To me it seems that US citizens tip for the same reason Russians remove shoes when entering the house, or Swiss shake both men and women's hands when greeting - it just feels weird not to. Case in point, Americans tip abroad too, when there's absolutely no incentive to do that.
What if the barrier between economic purpose and cultural custom is porous and indiscernible? And what if that helps undermine the concept of economic irrationality bias as a whole? How does one explain financial system failures or buying bad bonds and bubbles - bias? More “””””culture”””” - just the different complex characteristics of the financial system that are part of itself and can be understood only as the financial system itself is.
In this case the point was not about tipping in the first place.
It was about the completely irrational difference in how much he tends to tip as a result of interface design.
Do non-Swiss people shake men's xor women's hands?
> the same reason Russians remove shoes when entering the house... - it just feels weird not to
That's mostly a consequence of the infamously shitty climate and Russians not being rich enough to always instantly get in a car every time they exit a building. Preferring not to have wet dirt on your floor everywhere isn't an arbitrary whim.
> I knew all this, but it was still really hard to guess. I did it, but I had to fight my natural inclinations. And when I talked about this with friends - smart people, the sort of people who got into medical school! - none of them guessed, and no matter how much I argued with them they refused to start. The average medical student would sell their soul for 7.5% higher grades on standardized tests - but this was a step too far.
A great microcosm of why behavioral econ is bad IMO. I had tests like this, except for math competitions, and everyone guessed. It’s all very local and specific and contingent in ways that behavioral economics’ methods are neither equipped nor desiring to measure.
> Nobody believes studies anymore, which is fair. I trust in a salvageable core of behavioral economics and “nudgenomics” because I can feel in my bones that they’re true for me and the people around me.
Psychoanalysis, behaviorism, Christianity, faith healing, homeopathy, chiropractic, new age cults, hypnosis, etc. Again, it’s all local and depends on so many different things. But it’s not generalizable at all in the way they imply - a different person who had learned different things in childhood (and think learning in the sense of learning math or sociability, not “u were too nice to the child so he is a narcissist” type psychoanalysis nonsense popular a hundred years ago) or was just in a different situation (a poor person who grew up on a farm taking the survey who really cares about doing what he’s supposed to do to succeed, vs a rich kid who grew up in school and has learned to just daze through the tests he takes and just wants to get the survey over with quickly) would behave differently. It’s totally possible for a population or cultural-happening-local phenomenon to be true, but also only true in the sense that people choose to do that because they were taught it’s rude not to tip 20% or something, and if they were told not to they wouldn’t - and that seems very different from the sort of claim that I sense from the field.
> Galenter and Pliner
Based on demonst’s thing, it seems like they didn’t find instantaneous loss aversion, but large scale loss aversion, which is much more “diminishing marginal utility” style? dunno
> not rioting at systemic racism
Needing a scapegoat or martyr to riot is *very different* from caring only about a scapegoat or martyr. Millions of people care very deeply about systemic racism and black people in America - correctly or not - and they did before Floyd. The dynamics there are not at all well described by a general “population or individual” framing. Individual cases are easier to *prove*, as we saw with the video that made it blow up - literally thousands of other individual cases did not gain traction because they didn’t have good videos or evidence, despite being individuals. And plenty of other population phenomena cause protests!
On the Identifiable Victim Effect: for many things there are lots of examples of individual victims, but only a few go viral.
A good example here is people unable to afford healthcare using crowdfunding sites. Some people are able to raise tens or hundreds of thousands. Many more get a few hundred bucks and that's it. But a lot more donations are made to crowdfunding than to healthcare charities that provide care for people who can't afford it (as distinct from other types of healthcare charity, like research).
Perhaps if you had a thousand different stories about a thousand different single mothers written by a thousand different people, three of them would raise a lot more money than the generic story about single mothers in the abstract, and the other 997 would not show an identifiable victim effect.
There were a bunch of proto-George Floyds, like Tamir Rice and Eric Garner, who went viral but less so. There were lots of local ones who went even less viral than that. Whatever it was about Floyd that got his story to go viral in a way that the others didn't, that's the Individual Victim Effect. Perhaps it should be called the Sympathetic Individual Victim Effect or something?
"you value something you already have more than something you don’t."
Really sounds nitpicky vs "loss aversion" - wouldn't they produce the same behavioral outcome precisely 100% of the time? Can anyone delineate Loss Aversion and Endowment Effects?
I think the Endowment Effect should be unrelated to financial value. Loss Aversion would predict that it hurts more to lose something that is worth $100 than $1. It would predict that you stop caring for losses of value zero. But the Endowment Effect would still let you prefer the sticker in your hand over the potential other sticker, even though the sticker is completely worthless.
So there are situations where the two are not the same. But I agree that the overlap is pretty big, and in most situations it's probably hard to see a difference.
Loss aversion and endowment effect are totally different. The canonical example of the endowment effect is where the researcher offers to sell the control group a mug and they offer an average of $3, but when they give the intervention group a mug for free, they demand an average of $5 to sell the mug back to the researchers.
This can't be expressed in terms of loss aversion, because there is no loss. It's just a question of how much they value the mug. When they were buying it, they were only willing to pay $3, but once it was given to them, they were willing to walk away rather than sell it for $4.
I haven't really dug into the studies, so I don't know that this really demonstrates the endowment effect. Maybe this was just strategic, and they were trying to get as much money as possible. But that's what it is in theory, anyway.
I think I may have an incorrect interpretation of loss aversion - perhaps that's why I'm conflating loss aversion and endowment effect. I always thought of it as "people are more sensitive to costs than they are to benefits" Here's how I see your canonical example:
The person is averse to 'losing' the cup, so they require more money to sell it.
The person is averse to 'losing' their money, so they will give less for a cup that they don't own.
I have no formal education in behavioural science - am I over-interpreting 'loss aversion'?
Here's my attempt!
The endowment effect can be viewed as an irrationally increased value placed on the feeling of owning something. It's like a "but it's miiiine" bias. Not that valuing anything -- even (especially) vague feelings -- is irrational. But it's irrational if it's like an addiction that you don't *want* to want. Imagine you'd pay up to $100 for a snozzwobbit, meaning your value for it -- including the warm fuzzy feeling of owning it -- is $100. That means losing the snozzwobbit -- and the concomitant warm fuzziness -- should drop your utility by that same $100. If not, then something fishy has happened.
(Of course there are a million ways to rationalize such asymmetry. Maybe you didn't know how much you'd like the thing until you experienced it, or you just don't want to deal with transaction costs or risk of getting scammed. But if we control for all those things -- and many studies have tried to do so -- we can call the asymmetry the endowment effect.)
Loss aversion is a bit more general. Technically it means having an asymmetric utility function around an arbitrary reference point, where you think of a decrease from that reference point as a loss. Again, the weirdness of a utility function isn't itself irrational -- you like what you like. But the arbitrariness of the reference point can yield inconsistencies which are irrational. You can reframe gains/losses with a different reference point and people's utility function will totally change.
Consider the Allais paradox. Imagine a choice between (a) a certain million dollars and (b) a probable million dollars, a possible five million dollars, and tiny chance of zero dollars. People mostly prefer the certainty, which is entirely unobjectionable. Now imagine a choice between (a) a probable zero dollars and possible million dollars vs (b) a slightly more probable zero dollars and slightly less possible five million dollars. Now people feel like they might as well go for the five million. And -- here's the paradox -- you can put numbers on those "probables" and "possibles" such that the choices are inconsistent. Rationally, either five million is sufficiently better than one million to be worth a bigger risk of getting nothing, or it's not. In the first choice you can guarantee yourself a million dollars and you don't want to risk losing that. In the second choice there's no guarantee, only bigger or smaller gains.
Thus is your reasoning distorted.
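For concreteness, here's a small sketch with the textbook Allais numbers (the specific probabilities below are the standard ones from the literature, not something stated above); it checks that no assignment of utilities can make the common pair of choices consistent:

```python
import random

# Textbook Allais paradox gambles. People commonly pick 1A and 2B.
# 1A: $1M for sure                 1B: 0.89 -> $1M, 0.10 -> $5M, 0.01 -> $0
# 2A: 0.11 -> $1M, 0.89 -> $0      2B: 0.10 -> $5M, 0.90 -> $0
#
# 1A > 1B  means  u(1M) > 0.89*u(1M) + 0.10*u(5M) + 0.01*u(0),
#          i.e.   0.11*u(1M) - 0.10*u(5M) - 0.01*u(0) > 0.
# 2B > 2A  means  0.10*u(5M) + 0.90*u(0) > 0.11*u(1M) + 0.89*u(0),
#          i.e.   0.11*u(1M) - 0.10*u(5M) - 0.01*u(0) < 0.
# The same quantity can't be both positive and negative, so no utility
# function rationalizes the popular pair of choices. Numeric spot check:

def prefers_1a_and_2b(u0, u1m, u5m):
    pick_1a = u1m > 0.89 * u1m + 0.10 * u5m + 0.01 * u0
    pick_2b = 0.10 * u5m + 0.90 * u0 > 0.11 * u1m + 0.89 * u0
    return pick_1a and pick_2b

found = any(
    prefers_1a_and_2b(0.0, random.uniform(0, 10), random.uniform(0, 10))
    for _ in range(100_000)
)
print(found)  # False: no sampled utilities make both choices consistent
```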
If you're talking about your utility for snozzwobbits then the obvious reference point is the number of snozzwobbits you currently own. If your utility for an additional snozzwobbit is much less than your disutility for giving up one of your snozzwobbits, that's suspicious. Still not inherently irrational; maybe you have just the number of snozzwobbits you need and one more would be superfluous. But if we see that same asymmetry -- how much you'd pay for an additional snozzwobbit vs how much you'd sell one for -- regardless of how many snozzwobbits you own, that's irrational.
So there you have it. The endowment effect is a kind of loss aversion where your arbitrary reference point -- as in your value for snozzwobbits -- is however many you currently own. And the Allais paradox example shows that literal endowment/ownership isn't required for this cognitive bias to appear.
Thank you for this! The difference is definitely very subtle. If I've understood your explanation correctly, the endowment effect predicts everything loss aversion would predict in the cases where it's applicable; however, loss aversion actually covers a larger set of issues where people perceive a potential 'loss', which does not necessarily require anything to be owned.
PS: I turned this into a blog post: https://blog.beeminder.com/loss
"All subjects were entered into a raffle to win a gift certificate for participating in the study, and they were offered the opportunity to choose to donate some of it to single mothers. Subjects who saw Ad B didn’t donate any more of their gift certificate than those who saw Ad A. This is a good study. I’m mildly surprised by the result, but I don’t see anything wrong with it."
I wonder about this. I see a lot of ads in magazines etc. which do this exact thing - "Here is John, who is living rough on the streets for three years since his abusive stepfather threw him out of the family home at the age of fourteen". Usually there's a small-print disclaimer about "Photo is of actor, not of real homeless person" but you can generally *tell* that this is indeed an actor pretending to be the real person, the same way that radio ads where it's purportedly Mary and Sheila talking about this great new furniture store that just opened, you know it's two actresses and not real customers.
So maybe that has an effect - when you can tell this is "Actor playing a part" it doesn't hit you like "this really is a kid sleeping rough" where you would see them in a news story or documentary.
I think it being in a raffle also had an effect; this wasn't people choosing to make a donation based on General Ad A or Personalised Ad B, this was people being asked to give up part of a prize. I think in that case people are making decisions based on "how much do I think is reasonable to give, out of the prize I won?" rather than "will I give my donation based on how hard my heart-strings were tugged?"
(I hate heart-strings tugging campaigns because I *know* they are trying to emotionally manipulate me, and this annoys me so much I deliberately *won't* donate to such efforts).
I think there might be a general principle here that applies to most of behavioral economics. People, in general, have some level of self-awareness and capacity to learn. They also tend not to enjoy being manipulated. This can lead to what I might term "hardening" against manipulation attempts, where people gain the ability to recognize when they are being manipulated and develop a strong aversive reaction to it.
I think that many advertising tricks motivated by behavioral economics may end up "hardening" the general population to the same tricks that they employ. I wonder to what extent gains realized by behavioral economics represent lasting increases in the efficacy of manipulation, and to what extent will be offset by "hardening" of the general population against these techniques.
Without knowing much about the details of 'behavioral economics', it seems to me the criticism of the detractors is largely that while some effects can be shown in studies, it is not nearly as neat and generalizable as concepts such as 'loss aversion' would suggest. A lot of it is either fairly common sense or rather very complicated effects, and giving it the branding of 'behavioral economics' and reducing it to a couple of basic tenets is a marketing strategy of Kahneman etc. rather than genuine scholarship.
I may have read about it in the book "Nudge" but one example that stood out for me was for new employees and 401K plans. Although it's a great idea to sign up, many don't because of a lack of understanding and a menu of confusing "investment choices" which almost nobody wants to learn about and have to pick from. By making it opt-out and offering a choice to "just take the default most recommended investment allocation", participation goes way up. These are not the precise details, but in these kinds of cases I can see where a "nudge" can make an out-sized impact.
To me, arguing whether loss aversion *or* status quo effects are real is a bit like arguing whether the world is made up of centimeters or inches. They're just different representations of the world, describing large chunks of the same territory through slightly different lenses. Maybe one of them will prove slightly more useful than the other? Anyway, I agree with Scott that this is a far cry from proving loss aversion wrong.
Did you read Gal and Rucker? I thought they did good work doing experiments designed to prompt loss aversion but not status quo effects, or vice versa.
Is this the same Jason Hreha who founded the Walmart Behavioral lab - the first Fortune 50 behavioral economics team? Who made and sold a startup, is at Stanford, and is listed along with Dan Ariely on a behavioral science website as co-authoring an article?
If so, this is more interesting since it is Hreha repudiating his own considerable economic success.
This comes across as too cute by half.
As I understand it there is no career/professional value in producing a study that says, "I attempted to prove that the conventional wisdom X is false. After 18 months of study, reams of data and careful analysis I've come to the inescapable conclusion that the conventional wisdom is in fact correct."
The idea of risk seems a bit absent here? Is that how it really is in this field?
Investment banking acts to maximize expectation, with low levels of risk aversion, and we have global financial collapse every time regulatory restrictions are relaxed a bit to allow them to make more money.
Black Swan or Skin in the Game are probably good books on this, but the claim Taleb makes is that we are woefully bad at predicting risk; and that what we are actually measuring when we measure risk aversion is the degree to which economics is struggling to understand risk and payoff, not people in the street, who generally do a better job of keeping their affairs in order than businesses employing lots of economics/financial/probability experts that constantly need government bail-outs.
Wilmott (the quant, with a journal of the same name) argues quite persuasively in various places that even our current idea of "correlation" as defined in probability theory, is worse than useless for understanding risk and payoff.
None of the global financial collapses (since at least the Great Depression) were preceded by relaxation of regulatory restrictions. Most have been preceded by complex regulation, then innovation in the financial market in the context of those regulations, where those innovations carry systemic risks that no one (writ large) could recognize due to the complexity of the regulations and the innovations.
If you have a counter example, I would very much like to know, and let me know what, specifically, was the relaxation of regulation that you are thinking of.
I'm not an expert, I'm afraid, I just went to a talk on it and I'm dumbly repeating what I saw there. But I think it was the following
2000 Commodity Futures Modernization Act (repealed a ban on some risky financial instruments)
1999 Gramm-Leach-Bliley Act (removed restrictions separating commercial banks and investment banks)
1982 Garn-St. Germain Depository Institutions Act (loosening restrictions on issuing mortgages)
1980 Depository Institutions Deregulation and Monetary Control Act (repealing some interest ceilings)
What pieces of complex regulation are you referring to? Stuff like the mandates to give more loans or sub-prime mortgages?
"no-one (writ large) could recognize due to the complexity of the regulations and the innovations" -- are you talking about CDOs? I thought people (like Wilmott, who has a journal and wrote textbooks) said CDOs were toxic for quite a long time before the crisis?
But it sounds like we agree that banks did not understand the risks they were undertaking; and that it was not isolated but across the board. And as far as I know, these guys employ THE experts in economics and probability.
I disagree that there are risks you can't recognise due to the complexity. Well, sort of. Fine: you can't identify specific risks within the complexity - but you know that with increased complexity there will be increased risk - so sufficiently complex things you don't touch if you're risk averse. And being risk averse is just sensible.
So my point still stands: think of our best understanding of risk as a black box that says "too risky" or "fine"; and we have evidence that it has a strong habit of saying "fine" when it really should say "too risky". Positive risk-aversion results are where we use this box with average people, and when it says "fine" and people say "too risky" we say the box is correct because MATHS and the people are wrong because PEOPLE?
I don't think that any of those acts relaxed regulations when considered on net. They allowed some things to occur, made them really complex, and each time made the rules more ambiguous, allowing regulators to make it up as they went. I'm not an expert either, but that's the take I get from people I know in compliance in the financial industry. Each one of those acts is over a hundred pages and spawns thousands of pages of regulations. I don't know if it's true or not, but Milton Friedman was supposed to have said that NAFTA wasn't a free trade treaty, because a free trade treaty would be about a page long.
By writ large, I mean to imply that the "no-one" is used colloquially, not precisely. Yes, there were people who saw it all coming. No one saw covid coming in the same way. Yes, we can point to the rationalists and the preppers who saw it coming, but society as a whole did not see either one coming.
The issue isn't complexity per se. The issue is the type of complexity - it's new, and it's fiat. We deal with old, emergent complexity all the time, both in economic spheres and other spheres. Growing food and getting it onto people's plates is an incredibly complex task, but we've been developing the social know-how to deal with it since forever. A new complexity causes problems because there's no social know-how - suddenly implementing covid protocols caused problems in the food supply in the US.
A new reg can be simple or complex - and simple regs do exist. But most regs in the financial sphere are really complex.
And even that might not be the worst thing in the world. The fiat nature of the rules is the real kicker. Emergent rules may not be perfect (like at all), but they are the product of people trying to get shit done. They are almost always the product of people that are the most invested in getting the shit done, and they are done by the people with the most local knowledge. We may not like the rules that the interested parties would come up with on their own, but at least they would be as well informed as is possible.
What we get with financial regs are: Complex rules, put into place before anyone knows what their full effect will be, based on the design of people with less information than those being regulated.
That leads to people doing exactly what any cynic would expect - malicious compliance, second-order effects that no one knows about, and regulated parties using their superior information to subvert the rules.
We do agree that banks did not understand the risk they were taking (again, writ large; there are banks that refrained from dealing with CDOs, but we know what you mean). In your last paragraph, I think you are saying that risk aversion is more rational (or maybe more beneficial?) than simple expected value calculations in lots of places, even though lots of people say it is not. I would agree with that if that is what you are saying.
I could have sworn that Thinking Fast and Slow talks about loss aversion in terms of a faster-than-linear curve (maybe y=x^2, or something like that). Maybe they didn't say "small losses don't really matter" in English, but if you draw the curve near zero, it's apparent.
Is my memory off?
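For reference, the value function usually cited from Tversky and Kahneman (1992) is steeper for losses but not literally quadratic; here is a minimal sketch using the commonly quoted fitted parameters from that paper (the parameter values are from the literature, not from this thread):

```python
# Sketch of the prospect-theory value function with the commonly cited
# Tversky & Kahneman (1992) parameters: v(x) = x^0.88 for gains,
# v(x) = -2.25 * (-x)^0.88 for losses. The loss branch is steeper than the
# gain branch everywhere (because of lambda = 2.25), but its exponent is
# below 1, so it is mildly concave in magnitude rather than x^2-shaped.

ALPHA, BETA, LAM = 0.88, 0.88, 2.25

def value(x):
    return x ** ALPHA if x >= 0 else -LAM * (-x) ** BETA

for x in [1, 10, 100]:
    print(x, value(x), value(-x))
# 1:   gain  1.00, loss   -2.25
# 10:  gain  7.59, loss  -17.07
# 100: gain 57.54, loss -129.47
```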
"Unfortunately, the findings rebutting their view of loss aversion were carefully omitted from their papers, and other findings that went against their model were misrepresented so that they would instead support their pet theory. In short: any data that didn't fit Prospect Theory was dismissed or distorted.
I don't know what you'd call this behavior... but it's not science."
You know, I'm reading The Structure of Scientific Revolutions, spent over a decade in academia, and spent a decade in an industry turning science into products. This sounds exactly like science to me.
Harsh but fair
I actually didn't mean it to be. Science is a process, one that works really, really well at generating insight out of the work of flawed humans. It doesn't require angles, just some greedy humans willing to skewer their colleagues and rivals in front of an audience.
Sounds like you do have to play the angles, even if you're not an angel. (Sorry, typo humor can be irresistible.)
Its a cute pun, I don't mind mocking my own typos.
There's a cliche that 90% of human behavior involves giving, receiving, or bartering for attention. These three activities seem to me to correspond to production, consumption and exchange in economics. That is, exchange theory alone is probably an insufficient foundation for behavioral economics.
This is a brilliant defense of the field and I'm really grateful for it! Another thing that I believe to be reasonably unscathed by the replication crisis is research on present bias (aka akrasia) and commitment devices. Phew for us!
PS, not a correction to Scott's post per se but maybe a correction to an impression readers will likely have. I don't know if it actually matters but is interesting:
Hreha posted that article a full year ago. It reads as (and is) a perfectly apt reaction to the Ariely affair and I presume that when Hreha noticed it being circulated he just savvily removed the date from the article so people wouldn't be distracted by that or write it off as insufficiently timely. (I actually checked the internet archive and he made no other change besides removing the date.)
"Previous criticisms of loss aversion argue that most experiments are performed on undergrads, who are so poor that even small amounts of money might have unusual emotional meaning. Mrkva collects a sample of thousands of millionaires (!) and demonstrates that they show loss aversion for sums of money as small as $20."
The millionaires are interesting, but I suspect they aren't thinking about the real sums but instead in relative terms. To me, all of these experimental propositions feel less like "Would you like to win a small sum of actual money?" and more like "Come up with a correct heuristic for comfortable gambling on sums significant to you."
Personally, I find it weirdly difficult to isolate a single instance of a favourable-odds gamble from the possibility of a ruinous, if quite unlikely, losing streak under the same odds.
I would expect new money vs old money to behave differently too. Lots of millionaire-next-door types got there by worrying about $20 repeatedly.
Re: the Endowment effect, I don't think it's necessarily a cognitive bias so much as just a premium on information. In a world where people are occasionally swindled by others, the expected return on trading your mug for another that someone else *claims* is identical is, in fact, negative - once you account for a >0% chance that they might be trying to pull a fast one. The researchers might know for a fact that the endowed coffee mug and the one they offer are the same, but that fact isn't available to the study participants, so it is rational for them to ask for a higher price to offset their risk of losing an apparently functional coffee mug. If the subjects are allowed 10 minutes to study and fidget with mug A, then given 10 minutes to study and fidget with mug B, and *then* asked which one they wanted, and they mostly choose the first one anyway, then that's weird and seems to fit the bill for a bias. Maybe those studies exist - does anyone know? To me it seems that what gets cited as examples of the endowment effect is (like in Kahneman, Knetsch and Thaler's study) where researchers take two things that should be at the point of price indifference on the open market, note the subject's preference for the one they know more about, and then wave their hands and say "loss aversion!"
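To put that "premium on information" point in expected-value terms, here's a toy calculation with entirely made-up numbers:

```python
# Toy expected-value version of the "premium on information" argument above,
# with made-up numbers. You own a mug you know works (worth $5 to you). A
# stranger offers a supposedly identical mug. If there's some chance the
# offered mug is defective or the trade is a swindle, a rational person asks
# more to sell than they'd pay to buy, with no endowment effect at all.

value_known_mug = 5.00   # what the mug you've already inspected is worth to you
p_swindle = 0.25         # hypothetical chance the offered mug is worthless
value_offered_mug = (1 - p_swindle) * value_known_mug + p_swindle * 0.0

print(value_known_mug)    # 5.00 -> minimum you'd accept to part with your mug
print(value_offered_mug)  # 3.75 -> maximum you'd pay for the stranger's mug
# The willingness-to-accept / willingness-to-pay gap appears from uncertainty
# about the other mug, not from any bias.
```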
This doesn’t have much to do with loss aversion or the main point of the post, but when you talked about choosing a tipping amount, it reminded me of a thing I did in high school. (I’m not sure if I came up with this thing myself or heard of it from somewhere else. For all I know, this is part of some famous psych study.) I would write the numbers 1 2 3 4 spaced out horizontally on a piece of paper. Then I would ask a random person (well, as random as I could conveniently manage from my high school) to choose one. On the back of the paper, I had already written - why did you choose 3? Because it quickly became obvious that 3 was the overwhelming favorite choice. In my not-quite-random, n = 100ish experiment, 3 was chosen about 90% of the time, 2 was chosen about 10% of the time, and I don’t recall anyone choosing 1 or 4. I never let anyone participate who had witnessed anyone else making a choice, and I didn’t let them know the choice distribution before they made their own. I wonder if your choice of a tipping amount (3rd choice out of 4 ascending options) is almost the same thing, whatever that thing is. I never searched for an explanation, but it was a fun way to pass through the boredom of public schooling.
That makes sense to me, as I understand people. It's the same phenomenon that gets people to realize that the lottery is unlikely to come out to the seven numbers 1, 2, 3, 4, 5, 6, 7 where they don't see it if the numbers are 21, 56, 13, 45, 19, 5, 29. Both sets of numbers are exactly as likely as the other, but one feels more random and we expect random. If you ask people to pick among four numbers "randomly" then they are going to want to avoid the extremes. In this case the 1 and 4. I'm less certain on the difference between the 2 and 3, but intuitively I want to pick the 3 as well. Split the middle and round up?
Nice experiment
Supposedly the "winning-est" number if you play this game over the range 1-100 is 37.
Given that Loss Aversion and other behavioral economics theories are largely applied to sales and marketing applications, it seems that you could aggregate and genericize sales data from e-commerce platforms that compare a loss-aversion framing to another, more neutral framing. From a practical standpoint, we do this kind of testing all the time. So it seems like this A/B testing could be modified to provide alternate data points on this topic.
I understand that this method may not provide a perfect testing environment. I'm just wondering if gathering data from "real world" experiments would provide additional reference points.
Also, one has to wonder if companies like Google and Amazon already have reams of this kind of data/analysis that we don't have access to.
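For what it's worth, the comparison itself wouldn't need anything fancy - here's a minimal sketch (made-up conversion counts, plain two-proportion z-test) of how a framing A/B result could be read:

```python
# Minimal sketch of the kind of framing A/B comparison described above,
# with made-up numbers. Assume variant A (neutral framing) and variant B
# (loss-aversion framing) were each shown to 10,000 visitors.

from math import sqrt, erf

conv_a, n_a = 520, 10_000   # hypothetical conversions for neutral framing
conv_b, n_b = 585, 10_000   # hypothetical conversions for loss framing

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal approx

print(f"lift = {p_b - p_a:.3%}, z = {z:.2f}, p = {p_value:.3f}")
```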
Yeah. It was mostly a figure of speech. I KNOW they have this data. But I'm not expecting them to partner with a research institution to share it.
I used to work in retail (entry level starting in high school), and the company I worked for did sales just about every week. Sometime in the middle of the week the specific prices and items might shift around, but generally speaking almost everything was on sale for 40-60% off of the "suggested retail price" every week. A new CEO came in and got rid of the sales, and just started doing Walmart pricing - cheaper all the time. The prices were pretty much identical, and so were the items, but didn't involve coupons, waiting for a good sale, or anything else.
The customers hated it, and sales dropped like a rock. It turns out, they really liked getting a "good deal" on a higher priced item. They felt like the $100 price tag meant it was a quality item, but the $40 sale price meant they were getting a steal. Just selling them a $40 item meant a low quality item and no discount at all! I think it works for Walmart because people expect fairly low quality and believe the prices are unusually low. Experience shopping there seems to confirm both are true.
JC Penney tried this same thing and also undid it as it didn’t work. Sadly. But this really is specific to the many details of the modern retail environment, where much participation is already pretty dumb/“irrational” (lol fashion prices) so individual details being irrational is maybe unsurprising.
https://www.nytimes.com/2013/04/14/business/for-penney-a-tough-lesson-in-shopper-psychology.html
There's also an issue of traffic generation. If you create a need to come into the store to see what's on sale that day/week, then you will likely sell other non-sale items as well. Constant low prices may not harness this effect.
Could that be because they already had their niche audience, the "deal shoppers," while Wal-Mart had theirs, the "want it cheap" crowd? Neither way is particularly superior (or maybe Wal-Mart's is, based on its track record), but changing mid-stream means you lose the audience you've worked to build.
Yes, that was my thinking as well. The clientele of your business doesn't like when you change your business. Nobody needs to shop at a particular store, so when you change it too much they move on. I was more interested in the specific mechanism, where they loved to see something marked up to an obviously too-high initial price and then "reduced" to a more normal level. For some reason a particular kind of person is willing to pay slightly higher-than-normal prices for something if they think it's a really steep discount from some much higher number.
The thing with your George Floyd anecdote is that even if the identifiable victim effect weren't real, it doesn't mean people would *never* rally around an identifiable victim, just that they aren't more likely to do so. So it's entirely plausible that, due to other factors or just randomness, people rallied around an identifiable victim in this case.
I think you poisoned the well unintentionally with "statistics don't start riots" because now people are fixating on riots. Rioters were an infinitesimal proportion of people who were moved to care or act in some way by the recording of a single individual's personal experience, but were not moved to care or act in the same way by statistics. Those can't just be explained away by "well, people like to riot."
I think what you may be missing is that the study design of contrasting just statistics with just an example of one single mother isn't comparable to something like a video of police beating up or killing a black person. The latter isn't happening in a vacuum. People being moved to do something after watching the George Floyd video isn't a confirming example of the identifiable victim effect. Those people are still reacting to an aggregate of events; that is, they're reacting to statistics, or at least to their perception of statistics. The reaction is to the belief that this kind of behavior in police is pervasive and widespread. The one video is just a precipitating event. It didn't cause the response on its own. "Straw that broke the camel's back effect" is not a cognitive bias. It's just the way multifactor causation works.
The endowment effect is a nice example of the ambiguity of the claim that behavior is irrational. At first glance, it is irrational to value things you have more than identical things you don't have. But that pattern of values makes sense as a commitment strategy for enforcing property rights. If I am willing to fight hard to defend something I have and other people know that, then with luck people won't try to take things away from me and I won't have to fight for them. If I am willing to fight hard to take things other people have, I am likely to get into a lot of fights. Think of it as analogous to territorial behavior in animals.
I discuss some of this in an old article:
http://www.daviddfriedman.com/Academic/econ_and_evol_psych/economics_and_evol_psych.html
Some behavior being irrational doesn't mean it was never rational at any point in the historical development of human cognition. Hormonal responses to food that flood you with a desire to chow down on calorie-dense foods presumably have a plausible evo-endocrinology explanation in terms of being advantageous on the savannah of 300,000 BC, but they have become pathological in an environment of caloric abundance. Similarly, attachment to personal belongings that caused you to send strong signals to random strangers in the Hobbesian nightmare of proto-human pre-history that you would fight hard to protect them may have been advantageous then, but retaining the behavior today, when we have walls and locked doors, makes less sense.
That's all a cognitive bias is. It's a decision framework that works perfectly well in one context bleeding into the brain at large and taking over decisions in contexts in which it no longer works.
It was rational then and it arguably is rational now. It's a commitment strategy. Even in a modern society, actually protecting your ownership of things requires efforts by you, in some cases efforts that cost more than the thing is worth. Being committed to do it means it isn't in someone else's interest to try to take your things.
My article is largely on things that fit your pattern, that were rational in the environment we evolved in but no longer are, but this one may well still be rational.
I have a hypothesis on the identifiable victim effect question. The identifiable victim effect doesn't exist (at all). Rather, people respond to archetypal stories, and sometimes an archetypal story is easier to construct using identifiable characters. George Floyd is actually some kind of archetypal story about the powerful abusing the weak, told through the modern medium of a smartphone camera, and perhaps the combination of plot, characters and medium (and performance) is effective in telling that archetypal story in a way that statistics about police violence cannot be.
This superficially sounds the same as Hreha's observation that "... creative solutions that are tailor-made for the situation at hand *always* perform better than generic solutions based on one study or another." but I don't think it is. For one thing this places finite and measurable bounds on definable characteristics and suggests a path to producing a fully testable hypothesis that describes a framework for engagement with issues through media. Such a hypothesis, if you were able to even partially validate it, would in turn suggest paths to moving the "creative arts" into the realms of social science.
I imagine that you could describe stories (including interventions through media, etc.) as n-dimensional positions in vector space and assess the effects they would have on viewers. Actually, now that I think about it, this is all we're doing with recommendations on Netflix, YouTube etc., except we want to measure actions and sentiments about things that occur off the platform, and we want to test the idea that certain elements of presentation relate to the literary structure to produce those different outcomes...
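For what it's worth, here is a toy sketch of that vector-space idea. The feature names, the numbers, and the use of cosine similarity against an "archetype" vector are pure invention on my part, just to show the shape such a model might take.

```python
import numpy as np

# Hypothetical features for a "story", scored 0..1:
# (identifiability of victim, archetypal power imbalance, immediacy of medium, statistical framing)
george_floyd_video  = np.array([0.9, 0.95, 0.9, 0.1])
police_stats_report = np.array([0.05, 0.6, 0.1, 0.95])
charity_ad_mary     = np.array([0.8, 0.4, 0.3, 0.2])

def cosine_similarity(a, b):
    """Cosine of the angle between two story vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# How close is each story to a (hypothetical) engagement-predicting archetype?
archetype = np.array([1.0, 1.0, 1.0, 0.0])
for name, vec in [("video", george_floyd_video),
                  ("stats report", police_stats_report),
                  ("charity ad", charity_ad_mary)]:
    print(name, round(cosine_similarity(vec, archetype), 3))
```

The real work, of course, would be validating that any such features actually predict off-platform sentiment or action, which is exactly the testable hypothesis gestured at above.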
This isn't my field, so feel free to correct my ignorance.
Ah, behavioral economics.
I've read Thaler's "Misbehaving", and the story it tells is of a discipline fixated on an axiomatic notion of rational actors (using a particular definition of rationality), to which behavioral economics, with their experimental, interdisciplinary, real-world approach was a necessary correction.
Even remembering that all researchers overblow their work's impact and importance, and trying to be careful about accepting anything that confirms my priors, I see no reason to assume the story isn't roughly true. This would make behavioral economics an important step in the progress of scientific paradigms even if all of its specific theories turn out incorrect, simply by pointing in the right direction that would otherwise continue being ignored. (The first question I would ask of its critics to assess whether they're worth listening to, then, is "what's your alternative?")
There is, of course, a much less charitable interpretation of the above, which is that behavioral economics constitutes, to paraphrase Robert Skidelsky, "not any new insight, but technical prowess in making an old insight acceptable to the economics profession". This impression is exacerbated by the fact that the practical applications pursued by its practitioners turn out to be some "nudges" on the margins, aiming to exploit the "irrationality" or lead the "irrational", "misbehaving" people towards a more "rational" outcome. Essentially, all its momentum comes from catching up with advances in other fields of social science, and adapting them in a way that leaves the entire discipline of economics, its goals and underlying assumptions, intact. If one thinks economics is in serious need of a paradigm change, well, this clearly ain't it.
Then again, it still means the field of economics can now direct its funding into sound empirical science, which seems to benefit everyone. (I now see many psychologists and other social scientists cite and praise economists' research, which, given psychology's recently exposed thorny relationship with research standards, is understandable.)
Regarding the medical test story, it is not so clear to me whether it is as simple as all that.
Let's say we have N questions, modelled by N not-necessarily-iid discrete random variables X_i taking values in {1, 0, -0.5}, and also modelled by Y_i taking values in {1, -0.5}. It may still be possible that E(\sum X_i) >= E(\sum Y_i).
Here's an attempt at a proof. Let's take just one question. We can represent probability distributions over the three choices as points on a 2-simplex, i.e. (p_1, p_2, p_3) such that \sum_i p_i = 1 and p_i \in [0,1]. Their expectation given the scores {1, 0, -0.5} is just a linear map to R. Similarly, we can represent the probability distribution over just the two choices as a 1-simplex face of the above 2-simplex. Now, for example, the uniform distribution is a point in the centre of the base of the 2-simplex. Since 0.25 is a regular value of the map, its inverse image is a 1-dimensional submanifold of the 2-simplex (treated as a manifold with boundary), meaning there is a curve from the point (1/2, 1/2, 0), representing the uniform distribution over just T and F, into the interior of the 2-simplex. Which means I can have a distribution (p_1, p_2, p_3) over all three choices that gives the same expectation.
What is my intuition behind this? If for a given question I am equally unsure whether the answer is True or False, then it makes sense to assign them equal probability. However, if I am slightly less confident in one answer than the other -- which seems more realistic -- while still being in the region of guesswork (for me), then the probabilities to be assigned will not be uniform. That is, the Bernoulli parameter associated with the question is itself not uniformly distributed. In fact, the Bernoulli parameter for a question need not even follow a discrete distribution, and in that case I have to further figure out how to compress it to one number.
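A quick numerical check of the claim (my own worked example, using the scores as given above): a uniform guess over True/False has expectation 0.5·1 + 0.5·(−0.5) = 0.25, and a non-uniform distribution over all three options can match it exactly.

```python
# Scores: correct answer = 1, skip = 0, wrong answer = -0.5.
scores_three = (1.0, 0.0, -0.5)   # (correct, skip, wrong)
scores_two   = (1.0, -0.5)        # forced guess: (correct, wrong)

def expectation(probs, scores):
    """Expected score of a policy given its outcome probabilities."""
    assert abs(sum(probs) - 1.0) < 1e-9
    return sum(p * s for p, s in zip(probs, scores))

uniform_guess = expectation((0.5, 0.5), scores_two)            # 0.5 - 0.25 = 0.25
mixed_policy  = expectation((0.45, 0.15, 0.40), scores_three)  # 0.45 - 0.20 = 0.25
print(uniform_guess, mixed_policy)  # both 0.25 -- same expected score
```

So, as the proof above suggests, a policy that sometimes skips doesn't have to lower the expected score relative to always guessing; which is better depends entirely on how your confidence is actually distributed across questions.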
"But also: there are several giant murals of George Floyd within walking distance of my house. It sure seems people cared a lot when George Floyd (and Trayvon Martin, and Michael Brown, and…) got victimized. There are lots of statistics, like “US police kill about 1000 people a year, and about 10 of those are black, unarmed, and not in the process of doing anything unsympathetic like charging at the cops”. But somehow those statistics don’t start riots, and George Floyd does. You can publish 100 studies showing how “the Identifiable Victim Effect fails to replicate”, but I will still be convinced that George Floyd’s death affected America more than aggregate statistics about police shootings."
I'm surprised this point didn't go in a different direction. I think with the notion of Identifiable Victim Effect there's a lot of context missing. And I suspect this may be true of a number of different biases. I think the George Floyd poster, and all its attendant implications, has to do with *stacking* - look at the parentheses: "(and Trayvon Martin, and Michael Brown, and…)"
The same point can be made re: marketing efforts based on nudging - the cheesiness of a particular marketing campaign is not only a function of what the nudging seeks to achieve but also of the zeitgeist-based (rolls eyes...) context in which it functions. Which I guess is why music, and marketing, need periodic reinvention.
The broader point here is that I would love to see research looking at how biases operate contextually - how many publicly-adjudicated Identifiable Victims does it take for a population group to start exhibiting bounded rationality?
I have two basic questions about loss aversion, and I'm wondering whether these are beside the point or have been addressed in the research:
1] The experiments you generally read about involve small sums of money, say $100. If a person has total wealth of, say, a few hundred thousand or more (you may include any discounted future earnings in there if you like), then at the scale of $100 their utility function should be ~linear, so they should be risk neutral (there's a back-of-the-envelope sketch of this after question 2). So in order to establish loss aversion experimentally, wouldn't you need to be dealing with sums that are actually material to the person?
2] It's all good to present a fictional bet in an experimental setting, but in the real world someone needs to take the other side of the bet. Say everyone's loss averse; since one person's loss is another person's gain, how do you get to an equilibrium?
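On question 1, here's the back-of-the-envelope sketch referenced above. The wealth figure and the use of log utility are my own assumptions (log is just one standard concave choice); the point is only that the implied risk premium for a $100 gamble is tiny.

```python
from math import log, exp

wealth = 300_000.0   # assumed total wealth
stake = 100.0        # size of a 50/50 win/lose gamble

# Expected utility of the gamble under log utility, then its certainty equivalent.
expected_utility = 0.5 * log(wealth + stake) + 0.5 * log(wealth - stake)
certainty_equivalent = exp(expected_utility)
risk_premium = wealth - certainty_equivalent
print(f"risk premium: ${risk_premium:.4f}")  # roughly $0.017, i.e. about two cents
```

Which is consistent with the premise of the question: at $100 stakes, the curvature of a utility function over total wealth is essentially negligible.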
Behavioural economics is trying to solve all of economics by first solving all of psychology. If you think about it, this really is the logical end goal. The plan is to predict exactly what each individual is thinking and then make economic predictions factoring in each and every possible action (or at least the average) a human could make. This seems so stupid. How is this the cutting edge of economics? Why can't we just give people free healthcare?
This is Steve Sailer 101, sans golf course design references.
(BTW, Steve, I played Dunes Club a few weeks back - it is that good. They change the pins between 9s.)
Behavioral economics and social psychology tried to make iron rules of human behavior. Human behavior is constantly mutable. So iron rules do not exist.
>It sure seems people cared a lot when George Floyd (and Trayvon Martin, and Michael Brown, and…) got victimized. There are lots of statistics, like “US police kill about 1000 people a year, and about 10 of those are black, unarmed, and not in the process of doing anything unsympathetic like charging at the cops”. But somehow those statistics don’t start riots, and George Floyd does.
The ~1000 number is specifically for those fatally *shot* by police, not deaths from all causes. That might seem like a quibble, but between that and the 'police' qualifier the statistic is only capturing one of the three specific cases listed - and not the most salient one, either.
I'm uncertain how much I'd disagree with the point being made even if it was off by an order of magnitude, but it's a notable error when used for rhetorical flourish and IIRC Scott has made it before. Might see if I can dig up the precedent...
Hi Scott, I think you're generalizing too much from your own experience. When I faced multiple choice tests with negative marking, I always calculated the optimal guessing strategy beforehand and stuck to it. E.g., in the physics/math GRE there were four choices (or five, I don't exactly remember how many) per question. If one were to guess completely at random, one would get a zero or negative score in expectation. However, if one could eliminate even one choice, the expectation would become positive. So whenever I could eliminate one choice I answered. I know many other people who did the same. Similarly for tipping, I calculate the amount of tip (10% or 20%, so it's an easy calculation) and add it separately. Behavioral economics can't account for such behavior.
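For concreteness, here's the arithmetic of that guessing strategy under one common scoring scheme; I believe the physics GRE used five choices with a −0.25 penalty for wrong answers, but treat those exact numbers as an assumption.

```python
def expected_guess_score(num_remaining_choices, reward=1.0, penalty=-0.25):
    """Expected score of a uniform random guess among the remaining choices."""
    p_correct = 1.0 / num_remaining_choices
    return p_correct * reward + (1 - p_correct) * penalty

print(expected_guess_score(5))  # 0.0    -- pure random guess breaks even
print(expected_guess_score(4))  # 0.0625 -- eliminating one of five choices makes guessing pay
```

With a harsher penalty or fewer choices the break-even point moves, but the strategy is the same: guess exactly when elimination pushes the expectation above zero.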
Behavioral economics and social psychology are pretty adjacent subjects. Given that much of social psychology's findings have come under grave doubt I would be pretty wary of behavioral economics too.
Dude, write the names correctly. It is Yechiam and not Yechaim.
As a Hebrew speaker, it made for bumpy reading.
To me "behavioral economics" is the study of the Yogi Berra conundrum:
> Nobody goes there anymore. It’s too crowded.
I think so much of this (the actual argument AND the meta-argument) boils down to the transition
(no opinions) -> heuristics -> ideology.
This is a pattern one sees everywhere.
You want to buy a car. If you're like most people, you just don't care. You had Toyota, it was good. Then a Ford, it was good. Now you can get a good deal on a Kia.
You want to buy a phone. Well you've used Macs and you like them. You had an iPod and you liked it. The heuristic "I like Apple stuff" seems to work for you. You can spend a month researching phones, or you can go with the heuristic.
But somehow (and I think this is a transition that has been GROSSLY UNDER-THEORIZED in social science) a heuristic can become an ideology. My heuristic (hah!) until I see evidence otherwise is that this is essentially a transition from "I like X" to "I hate not X".
The heuristicist is happy with his heuristic and couldn't care less what choices you make.
The ideologue finds it essential to defend every bad choice Apple makes, to attack every good decision Intel makes. This rapidly shades into hanging out with the Apple people to make fun of those stupid Windows people, on to "how can you go out with someone who doesn't just like x86 but who works for AMD???"
This is everywhere!
Someone uses the heuristic "white suburbs" as a way to solve the problem "I want a quiet neighborhood". All they care about is the quiet part. But those who see the entire world as ideological (along this particular dimension) cannot believe that someone just made a quick choice of this type of neighborhood for this type of reason -- clearly they MUST have been motivated by racism. After all, people are divided in Chevy people and Ford people, there's no such thing as a person who just doesn't give a fsck about their brand of car...
We start with the heuristic "wearing a mask is probably worth doing in spite of the hassle".
In some people this transmutes into "I HATE non-mask-wearers", and because no one's willing to admit this, we get tortured excuses about "well, if they don't wear masks it results in a worse experience for the rest of us". Perhaps true, but when the battle shifts to ivermectin and their choice has ZERO influence on your future health, it's still all about hating the other.
This transition from heuristic to ideology seems very easy. Cases based on products are especially valuable for understanding because most of us hold all three relationships with some product or other: we can't imagine that anyone especially cares about their brand of TV, while caring a lot about a brand of soda, and shunning people who listen to the wrong music.
This is often explained as tribalism, but I'm not sure which comes first. In a lot of cases it seems to me like the heuristic comes first, it transforms into ideology, and then a tribe is discovered. (Maybe that's the loner path, and the tribe-first path is more common? But on the internet, for fan-type things, it definitely seems like the order is often heuristic -> ideology -> tribe.)
So back to the article.
What I see here is an example of this sort of thing. The Behavioral Economics guys are making a bunch of observations (which can be viewed as heuristics -- people will often engage in Sunk Cost Fallacy, people will often engage in Loss Aversion; if you don't have better data that's the way to bet as to their behavior). But in some individuals this gets transformed from a heuristic to an ideology or the opposing ideology.
For example, I don't get A DAMN THING about the anti-nudge people. They seem to be too stupid to understand that EVERYTHING about a form or procedure or default is a choice, so why not design the defaults as best for society -- with "best for society" something we debate and vote on if necessary? But anyway, you have these anti-nudge people around, and they have their ideology; not just a heuristic that "nudge procedures are probably bad" but full-on "anyone who ever has anything nice to say about nudge-related issues is my MORTAL ENEMY". And that seems to be everything about why the article was written by Hreha.
And of course it goes all the way. Scott wrote something about this many years ago:
https://slatestarcodex.com/2014/08/24/the-invisible-nation-reconciling-utilitarianism-and-contractualism/
which I would summarize as "utilitarianism is a good heuristic -- but it's a HEURISTIC". You can either accept that as a heuristic there are cases where it fails, and try to figure out a better understanding of life -- or you can convert utilitarianism into an ideology, and willingly drive over the cliff if that's what your heuristic tells you to do.
Most of our political insanity seems to derive from what I've been saying -- people who can't tell the difference between heuristics and reality (i.e., when to accept that the quick answers of the heuristic might be invalid or sub-optimal), and people who refuse to accept that sometimes a heuristic is just a heuristic, not a buried ideology.
> > When the two biggest scientists in your field are accused of "systemic misrepresentation", you know you've got a serious problem.
Not necessarily? It just means your field is big enough to have accusers in it.
> There are lots of statistics, like “US police kill about 1000 people a year, and about 10 of those are black, unarmed, and not in the process of doing anything unsympathetic like charging at the cops” ... But somehow those statistics don’t start riots, and George Floyd does.
According to mappingpoliceviolence.org, the US police kill over 30 unarmed black people each year, and over 100 unarmed people of all races.
Anyway, suggesting the Identifiable Victim Effect is not a thing is obviously silly if you compare X identified victims with X unidentified victims. If the finding is "1 identified victim 'only' feels as bad as X unidentified victims" for X > 100, then, uh, identifying a victim has a huge effect.
> Ad A: A map of the USA. Text describing how millions of single mothers across the country are working hard to support their children, and you should help them.
> Ad B: A picture of a woman. Text describing how this is Mary, a single mother who is working hard to support her children, and you should help her.
But of course, real life charities normally combine both: highlighting one victim, then citing statistics about the large number of victims. Did they not test the usual combination? Odd.
"If some sort of behavioral econ campaign can convince 1.5% of those 90 million Americans to get their vaccines, that’s 1.4 million more vaccinations and, under reasonable assumptions, maybe a few thousand lives saved."
Your math here seems like it might be using the wrong denominator. If 240 of 330 million Americans are currently vaccinated, then a 1.4% increase should mean either 1.4% of 240 million or of 330 million, depending on what's being measured. In either case, it means the effect is bigger than you said!
Either way, dismissing a 1.4% effect size as generally irrelevant is insane to me.
You should certainly read this piece by a little-known author to understand in what aspects George Floyd differs from a random Mary: https://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/
I witnessed the Identifiable Victim Effect firsthand. Friends of my girlfriend were raising money for Zolgensma for their newborn. It's Pretty Damn Expensive at about $2 million, but it cures a condition that is otherwise debilitating. To my surprise they succeeded, but the very fact that they did indicates the donors were willing to save the life of one child at a cost that could surely save many more people elsewhere.