Many would argue that women's sexual self-determination (including the option to sell nudes) was a major step forward in sexual ethics and an important aspect of ending women's oppression. (I personally would add a couple of caveats, but the point is, "all-time low" depends on your point of view - which is what Scott is aiming at, I suppose.)
Any sermon worth a damn has strong rule of law upholding it, but it still does well to remind people. And yes, morality is basically applied game theory, but admitting this is a weak move in the game, especially if you wish to change the rules! So the whole convoluted mess is sadly inevitable.
I'm with Mike on this one: while we haven't found the exact best moral system (and it would be nigh-impossible to prove), I don't think we should be shy about pointing at some alternative systems and saying they're worse. Killing your drinking wife, slavery, and the burning of witches are all things I'm happy to assert are wrong, not in my eyes or by my system, but by some universal and external morality.
Sam Harris' book The Moral Landscape has largely shaped my views here, good read.
I am 100% on board with a universally applicable external morality that is objective rather than subjective, but it’s dangerous territory. If you’re going to go there, you’ve got to be prepared to defend it.
I think ‘the violent death rate of humans’ is a pretty weak assertion. It doesn’t automatically defend itself. Is a universe where I lock every human in an invulnerable self-contained box that prevents them from doing anything but dying peacefully of old age the most morally correct universe? Is it even morally acceptable? Or is it a universe where I am a loathsome slaver, even if I’ve saved countless lives?
This sort of hand-waving of what morals Should Be based on passionate intensity about avoiding harm is terribly weak, even if it seems to play well in a soundbite.
I agree - I should have been more clear, I'm with Mike's first point of "it's reasonable to claim some moral systems are superior to others", but I don't agree with the second point that 'violent death rate of humans' is the best metric, though I do think in practice it's a good heuristic. I agree that minimizing harm, while also generally a good heuristic, is always woefully insufficient and sometimes actively misleading as a moral principle.
Do you propose any other simple metric that you think performs better than 'violent death of humans', or a guide to better action superior to 'avoid harming others'?
I think it’s pretty poor form to claim to have discovered the one true objectively correct morality without showing your work, unless your argument is so well-known or traditionally established that you can take for granted that everyone already knows your reasoning. It smacks of skipping steps and I’m calling him out for that.
Especially because I think ‘prevents violent death’ is a very badly-performing metric. You can easily justify arbitrary actions on the basis that you think they might prevent an increase in future violent deaths, even actions that themselves cause a lesser number of violent deaths. It’s an all-purpose rationalization for any desire of your secret heart that needs justification. It’s also pretty badly irrelevant to most issues of moral corruption in quotidian life: it won’t tell you much about whether it’s bad to hate someone or steal a cookie from the jar or return a lost wallet, not unless you engage in pretty tendentious reasoning.
No, if you want a safe and simple heuristic what you want is probably something exclusively deontological: don’t murder, don’t steal.
Setting aside morality, the accuracy of the term "illegitimate" has greatly declined. The birth of a child in vs. out of wedlock used to have very significant legal implications; now, much less so.
The complexity is that being against illegitimacy cashes out as cruelty to illegitimate children, who are surely not responsible for the circumstances of their birth.
True, they unfairly carry that stigma, but the charge of illegitimacy is leveled against the perpetrators, which discourages it. The consequences of the perpetrators' actions are then bodied forth in the children.
The phrase "illegitimate child" inherently levels the charge against the child, and nobody says "illegitimate parent" or "illegitimate sex". Maybe better to talk about "unprotected sex" instead?
"Absent father" (Or mother, less frequently) directs the blame at the correct target and also hinges on the correct property, i.e. the level of parental support, rather than the formalisation of marriage.
It is. On the other hand, the loss of the stigma against illegitimacy has resulted in a lot more children growing up without both parents, which is even worse.
In 1965, 75% of black babies in the US were born to married parents; now it's about 12%.
>There are tons of things that everyone naturally assumes and take for granted until an autistic science man does this one weird trick of asking himself "wait a minute, is that true actually? let's think about this rationally and do some tests", and then it turns out the world isn't flat, the planets aren't orbiting around Earth, time isn't the same everywhere, dreams are not prophetic, and free will is at best a fraught concept (normies hate him!)
Yes, but Scott said "probably".
Consider the numbers. Autistics are rare. Moral guardians are common. And the Pope is much more likely to endorse moral guardians' denunciations - at least, the half on his side - than an autistic-produced heresy.
This is a study *published in Nature*. Nature might as well be a papal bull; it's probably the #1 authoritative source and it also has a political agenda. It's *possible* for an autistic heresy against the established thought to be in there... but for a given social psych article tackling common wisdom to be such heresy is a lot less likely than for it to be somebody gleefully burning his enemies.
I think one has to distinguish between the political/economic/cultural and the moral power of Rome. There's no reason why Roman society couldn't be at its peak in a time of moral decline. It used to be thought that Britain's greatest periods were the late 16th and 19th centuries; but many people in both periods regarded themselves as in a time of moral and social decline, and in the Elizabethan era they were probably right. Similarly, the Romans looked right back to their patriarchal roots when they talked about moral decline: they cared about the fall of virtus, not about the increasing number of marble buildings or the quantity of grain imported to feed the burgeoning lumpenproletariat. Perhaps they idealised what was almost a pre-historic period to them; but one can't object to them judging themselves by other metrics than wealth and power.
In fact, I wouldn't be surprised if political/cultural progress and moral decline went hand in hand. That's the traditional view, at any rate.
But in all seriousness, I think you might be a little bit harsh on the poor old Japanese. It doesn't seem impossible that as a country becomes more civilised, genuine virtues might become unnecessary or even unhelpful. The instinct for independence, for instance, and the self-reliance that goes along with it. Or the toughness engendered by poverty and war. Probably to the Japanese and the old Europeans, certainly to the Romans, it appeared that centralisation had brought in a new world where scrounging, flattery, greed and cowardice were the way to get ahead.
The military virtues of the inhabitants of Rome were in decline as the imperial capital became ever more secure and wealthy. Imperial military power tended to come from further out. Trajan, for example, was born in Spain.
<i>There's no reason why Roman society couldn't be at its peak in a time of moral decline. It used to be thought that Britain's greatest periods were the late 16th and 19th centuries; but many people in both periods regarded themselves as in a time of moral and social decline, and in the Elizabethan era they were probably right.</i>
Funnily enough, I was reading a book that touched on this topic just a few days ago. The author blamed it on a change in preaching. Apparently late medieval sermons were mostly exhortations to good behaviour rather than expositions of theology, but then Protestant reformers took control of England and decreed that, from henceforth, sermons were to focus on points of Protestant doctrine rather than morality.
"Like most people I assumed without thinking that doctrine became less important after the Reformation."
And that is fascinating to me. If people were furiously debating about "your theology of baptism is wrong" (and didn't confine it merely to debate), why would you think they didn't care about doctrine? That it was a simple message of "Now the Bible is in the vernacular tongue, read the Bible, and follow your conscience"?
Diarmaid MacCulloch has an entire book on the Reformation (and this is only confining it to Western Europe) where the wars of doctrine are gone into in detail:
There's also his biography of Thomas Cromwell, and the careful path people had to steer around King Henry VIII - both Catholics and various Reformers could find themselves burned at the stake:
"Now, in autumn 1538, Lambert confronted a prominent London evangelical and royal chaplain, John Taylor, with outspoken scepticism about the bodily presence of Christ in eucharistic bread and wine. Taylor called on Robert Barnes to help him defend a real-presence theology which avoided papal error (Barnes was, after all, the most obvious and authentic Lutheran in all England), and he then brought in Cranmer. The Archbishop prudently put Lambert in confinement again – but all in vain: fatally convinced of his own rightness, Lambert appealed to the King to hear his case. This was a disastrous misjudgement.
Henry’s customary inclination to occupy himself with theology when lacking a wife made him take a particular interest in the case, and his mood was currently veering towards the conservative end of his volatile spectrum. That was apparent from a new royal proclamation on religion: a personal public intervention, sidelining his Vice-Gerent, who one might have thought had already produced enough regulation for the Church less than two months before. The proclamation followed up various of Cromwell’s orders, and repeated condemnations of Anabaptism and Becket, but it also imposed censorship on the printing press, including unauthorized versions of the Bible, and it expressly forbade clergy to marry – a reaction to the fact that in southern England a number of clergy were doing just that (not to mention the Archbishop of Canterbury’s wife Margarete, lurking obscurely in one of his palaces in Kent).
Even if we did not possess a draft of this proclamation emended in the King’s own hand, the general shapelessness and theological incoherence of the final version is redolent of brusque royal papering-over of disagreements among his bishops. Worse still for John Lambert, this document was issued on 16 November as part of the theatrics in the most high-profile heresy trial that early Tudor England had seen, with Lambert himself and King Henry as joint and opposed stars of the proceedings. The Supreme Head of the Church of England chose to preside himself over the event in Westminster Hall, symbolically clad in white, with his bishops merely as assistants to undertake the theological detail of prosecution. Cromwell’s only substantial part was to house the condemned prisoner, presumably at The Rolls, before Lambert was taken to the stake at Smithfield on 22 November: the same fate as Forest had suffered there six months before, but for polar-opposite beliefs.
The whole Lambert business hugely embarrassed John Foxe when he wrote it up in Acts and Monuments, given that it implicated some of his chief Protestant heroes in burning a man who looked in retrospect like a good Protestant. Cranmer in particular has come in for plenty of abuse for inconsistency among later writers. Yet the Archbishop’s own theology of the eucharist at the time was opposed to the views of Lambert, who may also have affirmed some real radicalism on infant baptism and the nature of Christ, and the Lutheran princes of Germany expressed no disapproval of the condemnation. Cromwell kept his counsel. Two days later, effectively in a continuation of the same theatre, Bishop Hilsey returned to Paul’s Cross to deliver a definitive exposure and mockery of the Holy Blood of Hailes, this time with the relic on hand as his visual aid – in careful pairing with this symbol of old error, new error was represented by four immigrant Anabaptist prisoners standing beside the pulpit bearing their heretics’ faggots, preparatory to burning at the stake. The occasion was a necessary act of damage limitation for the evangelical establishment in relation to King Henry."
I was under the impression that the Victorians thought they had the right ideas about sexual morality (and social issues like slavery) and they were distressed by the lewdness of earlier culture like Shakespeare and 18th century writers.
I don’t think the Victorians in general saw themselves as in a time of social decline. There were some people who were distressed by the loss of religious faith (like Matthew Arnold I believe).
I guess that's right, my apologies. I was thinking of people like Arnold and Froude, who were both elitists of a sort. So "many people" isn't at all accurate.
If Rome can be in a golden age while also being at the peak of moral degeneracy, that implies that moral degeneracy has nothing to do with whether a country is a good place to live, so maybe people should chill about getting the government to enforce morality.
But perhaps golden ages aren't good places to find oneself in. "May you live in interesting times". Rome at her peak was filthy, miserable and unstable, a little bit like Elizabethan England. A booming population coming to the cities to join the lumpenproletariat, tyrannical government, constant civil war (or in the English case, plots and persecutions). Enough to make any citizen long for the good old days of Cincinnatus, or merrie England...
Ada Palmer argues that the Italian Renaissance wasn't a good time for most people to live. High status people were showing off with the great art we value, but they were showing off because they were competing hard in unsettled times.
Well if they don't think the golden age of Rome was a good time to live, then perhaps people who want moral righteousness should stop talking about how [insert modern problem] caused the fall of the Roman empire.
Or it explains why the country peaked (and then went down) instead of continuing up. We should not expect moral decline to happen when things are at their worst, but at their best. People become complacent when their lives are easy. People try harder when the negative effects of their actions cause them harm.
Livy lived from 59 BCE to 17 CE; the Roman Empire hit its furthest extent under Trajan (98-117 CE). There were a hundred years and twelve whole emperors between when Livy lived and when the Roman Empire hit its peak.
Ideas don't last long enough to become traditional moral values if they aren't pro-survival or at least pro-flourishing, which implies that abandoning them is anti-survival and anti-flourishing. So, the richer and more powerful a society is, the more "moral degeneracy" it can tolerate without everything immediately going to pot.
But golden ages don't last forever, and the one we're in now is fundamentally unsustainable.
Hypothesis: Countries are apt to be at their peak when they're using up the moral capital developed in earlier ages. I'm not sure this is true, but it sounds good.
I think that's right. Developing moral capital is slow and boring. How much do people care about the eighteenth century these days? But it was then that Europe built itself up after the chaos of the past two hundred years, in preparation for the much better-remembered industrial civilisation of the nineteenth century.
I wonder whether the perceived “lack of morality” that people feel is associated with the decline of religion (as Nietzsche famously said, God is Dead). Without that external rule of religion, people may still be moral enough not to mug others/do bad things that the poll questions asked, but their actions may not fully align with *another* individual. And so to this other individual, the world is less moral. (Note this other individual doesn’t need to be religious either).
I completely agree that this study tells you more about the researchers’ bias rather than the participants’ bias. There is definitely shoehorning of findings, based on shaky assumptions of what these polls are actually reporting.
I think the point of including the opening quote is that a sense of decline in morality is pretty universal. I am pretty sure Savonarola, Torquemada, and Martin Luther all felt like morality was in steep decline, even if they lived at a time when Christianity was as strong as ever in Europe and their confirmation bias saw evidence for this decline in different places.
But, yes, I think many people think like this. You can hear the decline of religion lamented not just among the usual suspects on the religious right, but from more moderate voices, and even implied in secular discussions about despair and lack of meaning in working class America, and among secularists who think religion is a good idea.
Many people point to a God-shaped hole in our souls and society. They don’t see that they’re really talking about a hole-shaped God – almost infinitely plastic, like putty, to fill any gap.
And that, I think, goes to Scott’s point:
Even a society of perfectly moral, pious individuals would never be a perfectly moral society (by its own standards) for long – in large part because morality is subjective and a moving target with blurry boundaries.
Even within a religion, there’s always going to be a pulsating mismatch between any individual’s or congregation’s fixed definition of morality and that shared by the larger community. You can, for a while, fill the seams with more God-putty and dogma to cover up the poor fit, but you will have to lay it on pretty thick over time (inquisition-style), and eventually you’re bound to get a giant rift (reformation-style).
Insightful - thank you for taking the time to share your thoughts. I wasn't aware that similar sentiments about declining morality existed even in more religious times. I think the final couple of paragraphs that you quote hit the nail on the head.
> I think the point of including the opening quote is that a sense of decline in morality is pretty universal. I am pretty sure Savonarola, Torquemada, and Martin Luther all felt like morality was in steep decline
Alternative hypothesis: the sense of moral decline isn't universal, it just occurs at particular times and places, like first century Rome or sixteenth century Germany or late 20th/early 21st century America.
There are other long periods where morality stays steady (by the standards of one generation going into the next) or may actually increase. Do we see complaints of decreasing morality from, say, the early Victorian period in Britain?
Agreed, history is full of examples of Revivals and Awakenings where most people at the time would say that morality improved with them. I do think complaints of moral decline are more common, but there are quite a few times it was perceived to have increased. Here's a list of just Christian ones, and I believe it's true for nearly every religion. https://en.wikipedia.org/wiki/Christian_revival
The early Victorians thought things were as bad as ever or worse. The mid-Victorians were sure that there had been a lot of progress. They were rather pleased with the fact that people were no longer expected to tolerate or encourage serious drunkenness at dinner parties, and that nobody tried to bore them with stories of their sexual conquests under the impression that they would approve or be envious. (Both of these had been characteristic upper-class vices of the Georgian period.) They knew how much crime had gone down since the police were established, and knew that their lower crime statistics captured a larger proportion of the crime that was actually happening. And while there was still a big poverty problem at the end of the Victorian period, it was a poverty with cleaner homes, shorter working hours, some education and much less child labour.
My impression is that the whole 1860-1914 period in Britain was characterized by increasing moral optimism.
“They were rather pleased with the fact… that nobody tried to bore them with stories of their sexual conquests under the impression that they would approve or be envious.“
Certainly possible. I don’t know the moral values and attitudes throughout world history well enough to reject that hypothesis.
To be clear, though, I never meant to imply that I thought everyone, everywhere had this sense all the time. So maybe universal was the wrong word. I should have said “very, very, very common” 😉
However, the strict Victorian morality actually strikes me as a typical reaction to a sense of moral decline, though I can’t point to a specific quote to back that up. But by the early 1800s, society was changing very rapidly in every way. Probably too fast for some. The world had been (and was still being) torn apart by revolutions, war, industrialization, and enlightenment ideas that challenged ancient truths and values – even God. Victorian morality itself was probably not considered moral decline by many (even if some writers seemingly had issues with all the vanity and pretensions), but it was born in a bubble of privilege, surrounded by a world effectively carved into our cultural imagination by Charles Dickens – rife with crime, prostitution, and poverty (which was often considered a moral failing). If Victorian-era elites didn’t see the road to hell when they looked out their carriage windows, and feel morally superior to their contemporaries, I’d be surprised.
Regardless of which hypothesis is closer to the center of the complex bullseye, however, we probably agree that morality is a moving target to some degree, and that it likely moves at different speeds at different times and places.
>However, the strict Victorian morality actually strikes me as a typical reaction to a sense of moral decline, though I can’t point to a specific quote to back that up
Yes, a reaction to the moral decline of the Georgian era.
>Surrounded by a world effectively carved into our cultural imagination by Charles Dickens – rife with crime, prostitution, and poverty (which was often considered a moral failing)
Dickens' books are what an effective propagandist plea for the masses to cease their moral decline looks like.
Moreover, the Victorian era was quite long, and so was Dickens' career. Dickens started serializing Oliver Twist in 1837, the same year Queen Victoria acceded to the throne. Twist was sort-of contemporary, describing workhouses set up by the Poor Law of 1834.
However, David Copperfield, say, was published in 1850 and is no longer a contemporary but an *autobiographical* novel, which is to say, about the past. Great Expectations, published in the 1860s, is set (or rather begins) during the Napoleonic Wars. For example, by the 1860s (if I am reading Wikipedia correctly) the prison ships (featured in the novel) were no longer in use in Britain.
One theory I heard is that Victorian morality represented the rise of bourgeois, middle class values. Aristocrats in the 18th century don’t care about being respectable because their social position doesn’t depend on that, but the bourgeoisie has to prove its respectability, presentability and merit.
In my opinion Dickens’ novels are more a response to the social problems caused by industrialization than to moral decline in the population. Poor people (and unpretentious middle class people) in Dickens are presented as possessing a kind of innate morality, with the exception of the obvious rogues.
"Maud", by Tennyson, leaps to mind as an obvious example of early Victorian complaint about moral decline - the complaint is in character, but I think he's speaking at least partly with an authorial voice.
Depends what we are talking about, "Christianity" is/was huge in the West (including Russia). Like, you find Christian modes of thinking in the most unlikely places, like the writings of French revolutionaries, Karl Marx and Ray Kurzweil !
Pretty sure? Do you have sources or quotations to back that up? In the case of Torquemada, he would have witnessed the rise of Protestantism which he would presumably have viewed as a catastrophe, which would not have been confirmation bias.
Yes, I think Torquemada would consider the rise of Protestantism to be a sign of moral decline. I am not sure it actually *was* moral decline, but he would interpret it to be. This is all mindreading, but I would consider that confirmation bias.
"with the gradual relaxation of discipline, morals first gave way, as it were, then sank lower and lower, and finally began the downward plunge which has brought us to the present time, when we can endure neither our vices nor their cure."
Livy was writing right after the Roman republic, which had been stable for centuries, broke down into a century of increasingly violent rioting, followed by increasingly deadly civil wars. The last orgy of violence ended with Rome becoming a monarchy in all but name. If Livy didn't think morals were declining, I would wonder what he was smoking.
"Many people point to a God-shaped hole in our souls and society. They don’t see that they’re really talking about a hole-shaped God – almost infinitely plastic, like putty, to fill any gap."
I don't really think that works as a come-back. We're talking about a hole-shaped God? OK, then, but -- why a hole-shaped *God* -- why not a hole-shaped literature, or music, or drugs, or imaginary friend, or relationship, or approach to sex, or whatever else? Those things are pretty plastic too, and yet it's a *God*-shaped hole that people talk about, and find that they can fill.
One might say, "People talk about a key-shaped hole, but it's actually a hole-shaped key." Both are true -- but what does it prove?
Well, not everyone fills their hole with god. People fill the holes in their lives with all kinds of stuff – some benign (like philanthropy), others toxic (like alcohol or gambling). Few things are quite as malleable as the concept of God, though some may be quite plastic. Also very few people go around advocating explicitly that society would be better off if everyone realized that sex, drugs, and shopping were the answer. So religion is a bit different in those ways. But my point should be valid for any thing you want to fill a hole with:
If you are going to claim there is an X-shaped hole somewhere, you and your audience should have a shared understanding of what the shape of X is. And if you keep changing the shape of X to fit new holes, other holes, and specific nooks and crannies, then it no longer makes sense to describe the shape of the hole in terms of X, but rather point out that X can seemingly be shaped to fit a multitude of different holes.
The issue, then, is that people who think there’s a God-shaped hole somewhere typically think that God is a universal solution, so whatever shape the hole was, they would say it was God-shaped.
And keys and keyholes are not typically plastic, so won’t work at all in this context.
Some fair points, but not *entirely* fair, I think; at least a great many of the people using such language do not *at all* have an infinitely malleable view of God, and are making the claim about a God about whom they also make a rather long list of fairly specific theological claims.
So, I think the key analogy is relevant after all; and, in particular, relevant to show that the mere fact that one can turn around the hole/hole-filler statement says nothing at all about plasticity.
The specific claim usually being made is that God (and only God) can fill the hole in a way that *none of those other things* (sex, music, etc.) *can*. Moreover, the concept of God is somewhat rigid. The claim might be false. Or it might be true. But I don't think it's vacuous in the way you seem to be suggesting.
> the mere fact that one can turn around the hole/hole-filler statement says nothing at all about plasticity.
I agree. The plasticity claim is slightly different, but related. The phrase “hole-shaped god” is meant to suggest that the gods came into existence after the hole, and in response to it. As such, maybe the key analogy works after all, in that keys are often made to fit locks, not the other way around.
> many of the people using such language do not *at all* have an infinitely malleable view of God
On the individual level, I will agree with you that probably no one has an “infinitely” malleable view of their god. It’s just a little squishy and poorly defined around the edges. If you zoom out, however, people in the same community – even the same congregation – will differ about the exact “shape of god”, and most non-zealots’ idea of god is soft enough to accommodate the differences they have to deal with daily. But the further out you zoom, the difference in shape will just get larger. Once you get to a global and historical level, “god” is such a large and amorphous concept that it is nearly infinitely malleable.
> The specific claim usually being made is that God (and only God) can fill the hole in a way that *none of those other things* (sex, music, etc.) *can*.
Yes, I agree that that is often the claim from many religious people. And in the absence of perfect mind-reading, it seems unfalsifiable, even if everyone converted today.
If god doesn’t fill the hole for someone, true believers can always say that those people don’t believe enough or observe correctly. That they just have to try harder.
If something else fills the hole, they can say people are just deceiving themselves or are being led astray by false beliefs.
And if religion does fill the hole, believers can take credit, even if the particular shape is different, because one’s relationship to god is personal.
Most have a particular god in mind, but some will make the claim that religion, not a particular god, fills the hole. And that that is a good thing. I can understand the point, but find it very cynical to suggest that people are better off believing things that are false, than to (learn to) live with uncertainty.
If you believe in a god that fills a hole in you, however, I don’t think this will convince you. Nor am I sure I would want it to. This is not the point on which to change one’s faith.
"But the further out you zoom, the difference in shape will just get larger. Once you get to a global and historical level, “god” is such a large and amorphous concept that it is nearly infinitely malleable."
I would not really agree within the context of (say) western civilization in the past 1500 years. There are of course many sects, accounting for relatively small numbers of people, where that is true; but mainstream orthodox Christianity, Judaism, and Islam have relatively well-defined views of God that change relatively little even across denomination (and even all three agree on many things).
"If god doesn’t fill the hole for someone, true believers can always say that those people don’t believe enough or observe correctly. That they just have to try harder."
I agree that the claim, if made as an evidentiary point, is subject to those weaknesses. I've more often encountered it in a rhetorical or personal frame (somebody recounting their story, etc., or using that story to try to convince others to convert). I think it has strength in those contexts. I also think that it has *some* strength in an evidentiary way, if handled rightly, given the large number of people for whom it is true -- and, again, the fact that it is *God, specifically*, and not those other things, that has been found to be hole-filling. (Your points about the ill-definedness of God would, again, apply just as well, and in fact much better, to many of those other things, and yet there are relatively few people making those claims for those things.)
"I can understand the point, but find it very cynical to suggest that people are better off believing things that are false, than to (learn to) live with uncertainty."
On this we agree. Hole-filling by a false god is not to be desired.
I think I have expressed what I wanted to and will not necessarily reply again unless you raise new points, but please of course feel free to reply again to anything I said, and in any case thank you for the discussion.
(I noticed it was retracted, but then resubmitted, and I haven't read it anyway; this is just to show that the god-as-societal-regulator idea is controversial). The point is, if religion is really important for complex societies, then perhaps those who claimed that atheism will lead to societal decline and ills were not completely unjustified. This is not to say that religion is really necessary, but perhaps we need some replacement instead of going world-view neutral.
Yeah there’s a mutual comprehensibility thing that’s discounted here. Like anything outside of my one specific culture I grew up on always throws me for a loop.
For instance paying other men to fix your house is very haram and something akin to cuckoldry and I know it’s just a totally normal thing that people do and I have no justification for the feeling whatsoever.
Yeah when we had larger widespread religious experiences it kind of syncs everyone up. Wokism is a large sort of moral approach but that stuff seems bad to me too.
If only people could abide by the One True Way which is just coincidentally the one that I happened to be raised in.
You know, I’m embarrassed to say I never considered this loophole, and while I have to admit I don’t have the same level of revulsion, it would feel like I was cheating on my wife somehow.
This is very interesting. When my dad’s new immigrant neighbor was engaging in some dangerous DIY activities we tried to stop him but it was almost as if he’d rather risk killing himself than ask for help. A friend of mine said “Oh yeah, this is a cultural thing and not just in my culture. Everyone I know with an immigrant dad has had to talk him out of fixing things when he had no idea what he was doing.” I thought it was about saving money but didn’t realize there might be more to it.
In his book *Big Gods*, Ara Norenzayan makes a pretty good case for religions predicated on watchful gods concerned with human morality as necessary for large-scale civilization, and invocations of those gods' watchfulness keeping humans a bit more honest, allowing higher trust modes of organization - Norenzayan uses the example of an international set of Muslim banking organizations.
A key part of these religions is difficult-to-fake, public signals of fealty - the daily prayers of Islam being a great example.
In that kind of light, even if people are fundamentally kind, if Norenzayan's thesis is true, a decline in religiousness and a decline in morality more broadly would more or less be aspects of the same thing.
Nothing in Social Science makes sense except when you understand that (especially in the Anglo-world) Social Science after WW2 became the self-appointed religious police of society. They were to be the guardians of morality and thought (and to punish those who stepped out of line) so that something like “it” would never happen again.
They are still at this, still playing that same role. Mostly in obvious fashions, but of course the part their intellectuals enjoy the most is finding excuses to paper over any inconvenient incompatibilities between either different religious texts, or religious text and reality, ala Summa Theologica.
Point is, the priests of 0CE, of 1000CE, and of 2000CE all (mostly) have no idea what they are actually doing; but they are doing the same thing. But you won’t get this if you think that the social role of priest in 2000CE is played by the guy who graduated from Fuller Seminary, rather than the girl who graduated from Harvard Social Studies.
It remains an open question as to whether these 2000CE priests are *also* capable of creating durable social value. Like their 1000CE Christian equivalents? Or like their equivalents in say meso-America or Carthage? I’m skeptical of value creation, but don’t have time here to push this further.
Lee Kuan Yew (I think? - can't find it now) told a story about visiting London after the war and finding a newspaper stand in a train station with no one attending it. The way it worked was you took a paper from the pile and put your money in the pot, and even if some of the money was stolen, the losses were sometimes low enough that this worked out as a way to run a business.
The idea of trying that today is ludicrous. Since I first heard it I’ve always thought of this story as a kind of existence proof for the idea that _something_ has changed.
My kids’ school made a big deal at orientation that while you *can* put a lock on your locker, no one does. And they’ve never had to investigate a theft. The students literally told the principal when she first started “that isn’t how we do things here.”
Some of them get robbed though. One of our favorite farm stands now has a bank grade money box with a one way chute. A local chicken farmer used to notice a dozen eggs getting stolen here and there towards the end of the month, but continued with the honor system until someone stole all the eggs and the cash box.
For what it's worth these are still quite common where I live (Vienna, Austria), where you find newspapers in bags attached to street signs with a similar "honesty pot" above them.
That said, I've always assumed that this works because newspapers make money from adverts, not only sales, and any copy that goes missing is a relatively small capital loss but still counts as a copy "in circulation" for advertising purposes.
I think, even if lax security does imply moral fibre (and it does seem intuitively likely), using that argument can lead to poor security. Things should be secure just in case, even if an attack is unlikely currently.
I'm talking about security in general, not just home security. Particularly credit cards (see demost_'s comment).
Security has costs. Locking and unlocking a door takes a little time repeatedly. Losing a key or other similar failures takes more time and it's unpredictable.
People in cold climates seem less likely to lock their doors, presumably because they don't want to leave their neighbors outside to freeze.
When I lived in Newark, Delaware (a medium-sized college town), there was a while when I didn't lock my door. The weird thing is that if I mentioned it, people would get angry with me.
My theory is that they didn't want to hear me complaining if I got burglarized, but this is only a guess.
When Denny Hastert became Speaker of the House around 1998, and thus third in line for the Presidency, he then had to keep nuclear war secrets in his house. The security services in charge of these documents requested copies of his house keys (he lived 50 miles outside of Chicago). It turned out he didn't have any keys because he didn't have any locks on his doors.
Then he turned out to be a gay child molester.
But nobody remembers that because he was so boring.
The past is really complicated, so it's not surprising that survey questions about it aren't very reliable in what people respond.
By "anecdote" I don't mean to imply that it's untrue, just that it's a sort of thing that's hard to quantify and make a pretty graph in service of some abstract point.
I have a friend who lives in a big coastal US city who sleeps in the carriage house out back but has the kitchen in the main house. They have ended up leaving their house unlocked all the time because it’s more convenient than constantly unlocking and locking. They’re in a very walkable neighborhood, easy biking distance from downtown.
I suspect it would actually be totally fine for most people to leave their door unlocked.
I'm pretty good at remembering/estimating dates of history I lived through. When I started reading grown-up magazines around 1967, there was a public service campaign to tell people that due to the recent rise in auto thefts, they shouldn't leave their car keys in the ignition any more. After that came reminders to lock your car. My father, a good neighbor, made it a practice when walking down the street that if he saw a parked car with its lights on, he'd open it and turn off the lights. But over time in the first half of the 1970s, he wasn't able to do that anymore because the vast majority of cars came to be locked. I think the last time he was able to turn off lights was around 1972-1974.
So, yes, there was less car crime before the Late Sixties. On the other hand, I suspect breaking car windows peaked around, say, the 1980s. But car theft has come roaring back in this decade. So, it's all very complicated. Unless you follow crime statistics closely like I do, these various trends will be a blur, so you will just pick one simplification to answer the survey question.
When I learned as a young man how credit cards work, I was shocked. I am still shocked until today. You just have a number (or two), and everyone who knows this number can pay from your account. And you give this number freely away at 100s of restaurants and other places (nowadays to online shops), to complete strangers.
I find this pretty much as ludicrous as the newspaper stand. Actually, more ludicrous.
Related to credit cards but not terribly related to the discussion on morality ...
In the US, very early on as credit cards were being rolled out, Congress passed a law that effectively made merchants and banks absorb most or all of credit card fraud, assuming the cardholder actually looked at their bill and informed the credit card company of the fraudulent charges (and this was pre-WWW, so you had to wait for your paper bill to arrive for the month).
The credit card companies complained A LOT about this, but it turned out that this removed a lot of customer hesitation about owning a credit card. Which led to huge growth in the credit card industry. Which eventually led to huge profits for the banks even after the fraud losses [things are more competitive now, but there was a time when banks put their promising young executives in the credit card divisions to fast track their careers ...].
And it mostly worked fine for decades (though, obviously, not without some fraud).
Over time and as things get more automated and anonymous the credit card folks are having to up their security game. But it did work as you described for a long time, partially because of where the loss was placed.
>The credit card companies complained A LOT about this, but it turned out that this removed a lot of customer hesitation about owning a credit card. Which led to huge growth in the credit card industry. Which eventually led to huge profits for the banks even after the fraud losses
This sort of thing is why I'm skeptical of anti-regulationism. Here you have the ideal situation in the prisoner's dilemma, a reliable third party enforcing "cooperate", and short-sighted profit-seeking entities still overwhelmingly try to defect. They choose short-term profit over actions that will actually make their product more valuable to the customer (and more profitable to them).
Randian objectivism doesn't work, selfishness is always going to fall to Moloch.
Hmm, it's not really an ideal prisoner's dilemma because the credit card companies had no idea that taking on all the fraud losses would be profitable in the long run. An ideal prisoner's dilemma has certainty that picking the "cooperate" option will lead to both parties benefiting.
In one of Bruce Schneier's books he talks about how the UK didn't get that law, and the credit card companies mostly put the liability on customers by arguing that if someone stole your credit card number that must mean you were careless, because how else could anyone possibly get it.
And as a result the UK had far more fraud, because customers had much less ability to prevent fraud than the credit card companies did, and the companies didn't try very hard when they weren't the ones facing liability.
Even more ludicrous is how checks work. Your supposedly secret bank account number, which you should never tell anyone, because they could withdraw money from your account, is written on the front of every check.
I can for sure say I’ve seen people leave money on a table or bar in France, various states in the Caribbean, and places in Central America. Can’t recall for some other places in Europe or SE Asia, but I’d be really surprised if it was unheard of.
1. Depends very much on where you are, and also where you grew up. I grew up in Toronto and I always lock the door. A friend grew up in a town in New Brunswick and he never locked his door even when he moved to Toronto. I had a hard time believing he never got robbed.
2. When we were still using cash (before the late unpleasantness) I would do this, but I did make sure the waiter knew we were leaving - making eye contact etc.
If 99% of people lock their doors, it probably doesn't pay to be a burglar who goes around checking to see if doors are unlocked.
Similarly, if a lot of homeowners are armed, being a home invader probably isn't a good career path. So, homeowners who don't own guns benefit from other people who do.
I've lived all over the US and neither of those strikes me as crazy. Not locking doors is more of a personal affectation (I just think the risk is worth the convenience; I don't imagine many thieves walking around and just testing doors), but everyone leaves cash tips unattended, for example.
Many restaurants these days leave food or drink sitting out on a shelf where anybody can walk in off the street and access it. Amazingly, they expect that only people who have already sent them money will take food, and that those people will voluntarily limit what they take in relation to what they paid!
Even more amazing is that this actually seems to work, at least where I live (dense urban environment) I never hear complaints about theft and pretty much all the restaurants that take online pick-up orders do this. Presumably some would stop if theft was a problem. Other types of apparently riskier theft do happen (e.g. pick-pocketing).
Modern newspaper vending boxes aren't much more secure. Sure you are forced to put money in to open it, but once you do you have access to all the copies of the paper and are trusted to take only one.
I think the crazy thing about the honesty tins is more that someone could raid the tin than that someone could take the papers without paying - that does depend on the design of the tin, though.
Something I've seen homeless people do from time to time. Or just give them away in the hope of a "donation" in return. I've also seen a local theater company place advertisements for their upcoming show in each copy of the newspaper instead of paying for advertising space. That was kind of weird.
Other times I'll see people jimmy it open or take all the papers out and leave them on top so future people don't have to pay. Seen people take a stack of papers home with them to wrap glass for moving.
None of these are very common, and clearly none of these is enough of a negative impact to be worth improving security over.
I recently read an anecdote about a guy selling sports cards on eBay; while most buyers paid, a few claimed they 'didn't receive their purchase' and he was forced to refund them.
Point is it worked out for him overall. Not the same as an "honesty pot" but a modern similarity.
I've known honest eBay sellers (antiques, vintage and specialty electronic equipment) who had to get out of the game entirely due to buyer fraud: "arrived broken", "not as described", free return of substitute goods, etc. As a response to fraudulent sellers, and to promote buyer confidence, eBay now sides so firmly with the buyer that sellers are left in a very vulnerable position.
I can see how high-value, general-interest items would quickly become unprofitable for sale on eBay. It seems like specific collector and/or niche-interest items would have better luck, by my reasoning (in-group mentality) and my one anecdote.
[Philosophical tangent to follow]
It seems like the human tendency to group together and create urban areas that offer more opportunities (economic, dating, creative, etc.) also tends to break up communities, which by definition are people who know each other and are interdependent socially as well as economically. This leads to moral decline (steering back on topic) and collapse, in a cycle.
This still happens today. I went camping in the US literally this weekend, and there was no attendant at the entrance. To get a campsite I just filled out a paper form that was kept in a zip lock bag on a table, and put my money in an envelope.
Not really related, but my father mentioned that his least favorite part of his newspaper route (40s-into-50s, thrown from a bike; eventually when he had a pretty big route, he went to a bank and got a $300 loan for a car, at age fourteen ;-)) was having to go by and ask people for their money. He even on his own dime bought stamps and addressed envelopes hoping they would just mail it to him. (He had to pay for the papers, you see.) One of those lessons only a job teaches perhaps. Don't be owing money - pay what you owe quickly and don't make people ask you twice or 3x. Certainly he is this way. I wouldn't call him a moral man though.
(a) In London right after both WWI and WWII they had huge labor shortages because so many men had not come back from the war. So during those few-year periods they had no choice but to try all sorts of no-labor-needed arrangements as temporary expedients.
(b) Where I live, in a large U.S. city, that sort of practice (if not the exact example cause there are no newspaper stands anymore) is commonplace. E.g. I paid for my lunch just now by leaving a couple of bills on the lunchroom counter and walking out. Etc.
Does anyone know if this has been tested recently? My intuition is that this probably depends a lot on where you live, but in most places in most parts of the world, you could still have a vending stand like that (maybe not newspaper though, since nobody buys newspapers anymore). But this is an easily testable claim.
I’ve been told by friends who live in Singapore that it is common practice to claim your table at a food court by leaving your phone on the table while you’re at the stalls ordering food.
I am reminded of this from Mark Twain: "When I was a boy of fourteen, my father was so ignorant I could hardly stand to have the old man around. But when I got to be twenty-one, I was astonished at how much he had learned in seven years."
Our values & ideas always go from indistinct and adaptable when we are young to concrete and rigid as we grow. Does not matter whether it is moral values or technological concepts or intelligence etc., I find we flip from being accepting of other views when we are young to judging other views against our own as we grow older.
There were times I used to wonder "Is it me or the world around me which is changing?" But then I realised, it is both, "I" and "the world" around me mutates, it is just that I mutated with the world when I was young and hence did not see the difference, but as I grew old I fell out of sync with the mutation which caused me to see the change & judge it.
So, irrespective of what actually happened, we always think that things in our childhood were much better and have degraded over time.
The other effect is that when you are young, and hear about <this wonderful new thing> you are hopeful and credulous enough to believe that this really is the way of the future, and well worth the investment of your time in learning. Having been around a while, you discover that a great many wonderful new things are merely the latest fashion. In a decade they will be forgotten -- or it will be clear that this is more than a fad, and you can start your learning then.
Definitely I think as you get older, you look back at childhood and early youth with rose-tinted glasses. People were friendlier, the world was nicer, things were just better.
Which is why *of course* asking people 'do you think morals/ the taste of tomatoes has declined since you were young?' is so subjective and going to get "between when I was born and now, things have definitely gone downhill".
Because the world *does* change, and we get a lot of experiences bad as well as good, and things seemed to be simpler when we were kids because we had much less to worry about in a limited sense ('is Pete my friend, does he like me or have it in for me?' doesn't have as much heft as 'Pete is my boss, does he like me or have it in for me?' when it comes to messing up your life).
I think that, in general, people are also nicer to children. Most people have much higher moral standards for interacting with children, so it might be that people really are more moral (to us) when we're young and have the protection of youth.
Bullshit, people aren't nicer to children. In fact, adults treat children much, much worse than they do each other. If an adult tried to forcibly imprison someone else for years (even outside of school they forced activities and completely controlled the schedule; I wished I could have played soccer with the other kids but nobody was allowed to leave their own backyard except for a very rare playdate), then violent self-defense would be considered justified. The adult legal system doesn't tell victims of assault that they just need to interact more with their abuser and force it to happen more. And a lot more stuff like this.
On top of it all, the US legal system forcibly returns children who run away from home unless they experienced a short list of specific abuses. I'm not sure what the situation is like in other countries, but adults get nothing like this.
"When the business man rebukes the idealism of his office-boy, it is commonly in some such speech as this: "Ah, yes, when one is young, one has these ideals in the abstract and these castles in the air; but in middle age they all break up like clouds, and one comes down to a belief in practical politics, to using the machinery one has and getting on with the world as it is." Thus, at least, venerable and philanthropic old men now in their honoured graves used to talk to me when I was a boy. But since then I have grown up and have discovered that these philanthropic old men were telling lies. What has really happened is exactly the opposite of what they said would happen. They said that I should lose my ideals and begin to believe in the methods of practical politicians. Now, I have not lost my ideals in the least; my faith in fundamentals is exactly what it always was. What I have lost is my old childlike faith in practical politics. I am still as much concerned as ever about the Battle of Armageddon; but I am not so much concerned about the General Election. As a babe I leapt up on my mother's knee at the mere mention of it. No; the vision is always solid and reliable. The vision is always a fact. It is the reality that is often a fraud. As much as I ever did, more than I ever did, I believe in Liberalism. But there was a rosy time of innocence when I believed in Liberals."
This reminds me of the saying, "Women marry men expecting them to change. Men marry women expecting them not to." In reality, both are right, and both are wrong, since in some ways they change and in others they stay the same. Perception is sometimes a strange thing.
Nice post. By the way, violent crime reported is more than double since 1960, but actual murders are up only 20%. That difference may be revealing.
Measures of "violent crime" can pick up changes in whether people feel it's worthwhile to report crime to police, in which case more reports can sometimes correspond to *less* actual crime.
For this reason, historians prefer to track homicides over time, as murder is almost always reported.
When I see murders up 1.2x, and a 2.5x increase of "violent crimes," one immediate hypothesis is "actually, there aren't 2.5x more violent crimes, people these days just report more of the crime that happens to police."
An exact opposite hypothesis would be "the increase in violent crime is real, and there would be a lot more murders, too, except that modern medicine keeps the assault victims alive and it's not murder if you don't die."
A third sideways hypothesis would be "there aren't even more attempted murders, just more handguns to attack with instead of knives, so the same level of violence as in 1960 is getting 20% more people actually killed."
Which is right? I don't know. But I do know the chart can't tell you!
So "beware the man of one chart" just like you would "beware the man of one study."
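The reporting-rate hypothesis above is easy to make concrete with back-of-envelope arithmetic. A minimal sketch, where all the input figures (1960 reporting rate, true incident counts) are purely illustrative assumptions, not real statistics:

```python
# Illustrative sketch: how a change in the share of violent crime that gets
# reported to police can make "reported violent crime" rise 2.5x while the
# underlying violence rises only 1.2x. All input figures are assumptions.

murders_1960 = 100            # homicides: almost always reported
murders_now = 120             # up 1.2x, assumed to track true violence

true_violent_1960 = 10_000    # assumed true violent incidents in 1960
report_rate_1960 = 0.30       # assume only 30% were reported back then

# If true violence grew in step with murders (1.2x)...
true_violent_now = true_violent_1960 * (murders_now / murders_1960)

reported_1960 = true_violent_1960 * report_rate_1960   # incidents on the books
reported_now_target = reported_1960 * 2.5              # the observed 2.5x stat

# ...then this is the modern reporting rate needed to produce the 2.5x figure:
implied_report_rate = reported_now_target / true_violent_now
print(f"implied modern reporting rate: {implied_report_rate:.3f}")
```

Under these made-up numbers, a rise in the reporting rate from 30% to roughly 62% fully reconciles the two charts with no extra violence at all, which is exactly why the chart alone can't distinguish the hypotheses.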
Confounder: it's way harder to do anything with a stolen car today. Parts way easier to track, etc. I'll bet if you steal a Tesla, you will find it extremely difficult to get even a tiny sum for it.
Criminals (for the most part) don't steal cars anymore, they just break windows and take whatever valuables are there that can be taken.
Car theft in most American cities has skyrocketed in the past few years. It's doubled in the past year in the city where I live (Memphis). It's mostly Kias and Hyundais being targeted, because they didn't have immobilizers prior to 2021, and there were social media instructional videos on how to steal them.
My sense (in my area) is that in most of those thefts the car gets joy-ridden, partied in, then used for a robbery getaway and abandoned. I don't *think* there's a big market for hot Kia Souls or parts.
A constant lament on social media locally is people's tools being stolen out of their trucks, even when they have a cover or box.
Down here they steal catalytic converters a lot - there's a place on the interstate with a big sign "We Buy Catalytic Converters!" which almost seems like it might not be legit. And then sometimes people come out of their houses in the morning and find their truck up on blocks, the wheels gone.
A lot of times the stolen cars in this area seem to be perhaps something like a gang initiation? For which I suppose we're supposed to be grateful that it was so mild. The vehicle will be found in the immigrant area that is a sort of Bermuda Triangle as far as solving crime goes, evidence of some sort of party like Flamin' Hot cheetos all over the carpet. The cops do their job if they get the vehicle returned to you. There is maybe not much effort to figure out who stole the car, when it's not part of an actual theft ring.
A general problem about survey questions about "the past" is that there are a lot of different alternatives for what is "the past." For example, homicides, auto thefts, and traffic fatalities in the 2020s are way up over the 2010s. On the other hand, lately Biden supporters have taken to arguing that early 2023 appears to have been less chaotic than 2021, so what are you worrying about?
Similarly, combat deaths in Eastern Europe are way down compared to the Battle of Stalingrad.
Murder is usually considered to have increased less than expected because medical treatment has improved and many would-be murders are now just attempted murders.
I agree that prisoner-on-prisoner crime isn't getting counted, and neither is guard on prisoner. I'm guessing that if a prisoner attacks a guard, that *might* be counted.
So, while this is obviously true, I've now seen this justification used enough to raise the question: Aren't attempted murders *also* pretty consistently reported? Shouldn't we be able to correct for that effect? Has anyone attempted to do so?
I'm not sure. My guess would be that if X assaults Y, it's hard to tell if it was an "attempted murder" or just a regular assault, so the natural category is overall violent crime statistics.
Perhaps there's something interesting to do with trying to establish base rates of deaths from assaults across different contexts where homicide isn't a plausible motive in some contexts.
An improvement in society that results from a population's broad acceptance that ending another's life for one's material or emotional gain is morally wrong is more dependable than one that results from environmental variables, because it generates a lasting 'vibe.' If, for instance, stricter policing is the cause, this can change with a new mayor in a matter of weeks.
Forgive me if I'm misunderstanding, but are you saying fewer murders per capita is worse in the opinion of people who think having lots of people is making the world worse?
I'm simply saying that an improvement in the murder rate is better as the result of some sustaining moral improvement, such as examples of leadership that inspire personal change(1) in a population, than of an unchosen/forced external factor such as stricter rules/policing, which improves behavior only while in effect.
Apologies if I did not address your original point.
(1)"The history of Europe during the later Middle Ages and Renaissance is largely a history of the social confusions that arises when large numbers of those who should be seers abandon spiritual authority in favour of money and political power. And contemporary history is the hideous record of what happens when political bosses, businessmen or class-conscious proletarians assume the Brahman’s function of formulating a philosophy of life; when usurers dictate policy and debate the issues of war and peace; and when the warrior’s caste duty is imposed on all and sundry, regardless of psycho-physical make-up and vocation.”
I'm too eager to get this comment out to really check thoroughly, first... (edit: well that bit me — dunno how many damn words were autocorrected to the wrong goddamn form, initially...)
...but it looks to me like you're right about the statistical methods used. That's not how I'd do it, I think.
The "Bayesian" method they use is interesting, but the documentation on it (that is, on the "Bayesian ROPE" method using the Highest [Probability] Density Interval™) takes ESPECIAL CARE! to caution the would-be user that defining the ROPE ("region of practical equivalence" — the range wherein the parameter-of-interest's values are "negligible" / "of negligible magnitude") offers an unfortunate latitude to the researcher.
That is: both the range itself *and* the choice of units are left to the user's judgment, with no clear Best And Objective Way to use as a bright-line distinction — and the choice of units can determine whether the *same exact data* is counted as confirming the null or no.
Man, this is just *made* for Garden-of-Forking-Paths–ing (GoFPing?) until you've got a satisfying conclusion!
That's in theory, anyway. In this particular case, maybe they actually did pick common-sensical values for the "negligible effect" range (ROPE).
Agreed in the sense that Scott's quote of "ROPE" leaves out the most important part of it -- what was the researchers' definition for region of practical equivalence? Ctrl-F:ing the article, they define it in the analysis section, as "±0.1 standard deviations". That is, x % HDI within ROPE means that x% of the highest density interval of the Bayesian posterior estimate was within 0.1 standard deviations of ... ahem, I am not actually sure of what? I wish authors would explain what they did in detail, at least in supplements or something.
Also, while we are talking about statistics, for question 78 in Table S3, it is not the r^2 that is -0.006 but b (unstandardized regression coefficient, probably?) . r^2 is 0.008.
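For readers unfamiliar with the mechanics being debated here, a minimal sketch of the "% HDI within ROPE" decision rule: draw posterior samples, find the narrowest 95% highest-density interval, and ask what fraction of it falls inside ±0.1 standard deviations. The posterior below (mean 0.02, SD 0.05 for a standardized effect) is an invented example, not the paper's data:

```python
import numpy as np

def hdi(samples, mass=0.95):
    """Narrowest interval containing `mass` of the samples (unimodal case)."""
    s = np.sort(samples)
    n = len(s)
    k = int(np.ceil(mass * n))          # number of points the interval must cover
    widths = s[k - 1:] - s[:n - k + 1]  # width of every candidate interval
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]

rng = np.random.default_rng(0)
# Hypothetical posterior for a standardized slope: mean 0.02, sd 0.05.
posterior = rng.normal(0.02, 0.05, 100_000)

lo, hi = hdi(posterior)
rope = (-0.1, 0.1)                      # ±0.1 SD, the cutoff the paper reports

# Fraction of the 95% HDI's samples that lie inside the ROPE:
inside = posterior[(posterior >= lo) & (posterior <= hi)]
pct = float(np.mean((inside >= rope[0]) & (inside <= rope[1])))
print(f"{pct:.1%} of the 95% HDI falls inside the ROPE")
```

The researcher degrees of freedom discussed above live in `rope`: widen it (or rescale the parameter) and the same posterior clears whatever "practical equivalence" threshold is being used.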
>(think I’m strawmanning? Read the last paragraph of the Discussion section)
My god, you're not kidding. Quoting it for anyone else:
>>The United States faces many well-documented problems, from climate change and terrorism to racial injustice and economic inequality—and yet, most US Americans believe their government should devote scarce resources to reversing an imaginary trend.
...Also, I'm surprised you didn't link this back to "Social Psychology is a Flamethrower"; this is clearly some choice napalm.
Wow, I was only a little surprised to hear that the Discussion was like that; I'm a fair bit more surprised to see that it was totally like that, without the slightest reservation.
Yeah, my respect for Nature certainly took a big hit. Too much time is spent telling each other how terrible the other side is. And now it has spread to science journals. (Scientific American followed a similar path, but ~30 years ago.)
Coming right on the back of their editorial blasting people for caring about AI x risk while making no attempt to refute AI x risk. It's ideologically captured by the Successor Ideology.
That's not even the worst sentence, imo. Also from the discussion section: "If low morality is a cause for concern, then declining morality may be a veritable call to arms, and leaders who promise to halt that illusory slide—to “make America great again”, as it were—may have outsized appeal." Completely broadcasting their disdain for Trump. I don't like him either, but I wouldn't mention that in a scientific paper. And people wonder why Trumpers don't trust scientists.
Huh, didn't notice that. I mean, it's reasonably-accurate as a description of Trumpists (I think most of them would agree in principle that declining morality *is* part of their motivation to vote for Trump, it's just that they don't think it's "illusory") but yeah, pretty explicit.
It's abundantly clear to those who are not liberal elites that their opinions and beliefs are not valued by those that are. It goes back long before Hillary Clinton called them a "basket of deplorables" or Obama disparaged how "They get bitter, they cling to guns or religion or antipathy to people who aren't like them or anti-immigrant sentiment or anti-trade sentiment as a way to explain their frustrations."
Um....what? Scott claims that this paragraph says:
1. conservatives are wrong
2. conservatives' fears are fake
3. we should refocus conservatives fears onto problems liberals care about
But the paragraph:
1/2. Does not single out conservatives as affected by this bias, let alone say "conservatives are wrong" or "conservatives' fears are fake" in general
3. Does not exclusively mention left-coded problems, as terrorism is traditionally right-coded. I guess if you interpret it as referring to the increase in far-right terrorism, then okay, I see what you mean, but I wouldn't be a fan of terrorism even if "my tribe" were doing it. In fact I'm pretty sure I would be bothered by "my tribe" killing random people. And it doesn't feel that long ago that I was hearing about the evils of ISIS all the time (especially from the right). And I agreed, ISIS were nasty bastards that we should Do Something About, though by today's standards it seems odd that it was right-coded. This article is from 2018, does it not overlap that period?
And yeah it also says '“make America great again”, as it were' but is that not the kind of response you would expect from people who perceive moral decay? Like, if someone has a bug up their ass about "the moral decay of America" and therefore wants to “make America great again”, does it not make sense to regard this as a mistake if in fact there is no moral decay?
Maybe Scott's criticism would've rung truer if he'd put it at the bottom, after his main criticisms, rather than at the top, where I'd fact-check before reading on.
>1/2. Does not single out conservatives as affected by this bias, let alone say "conservatives are wrong" or "conservatives' fears are fake" in general
Worries about moral decline are almost the defining conservative issue. Certainly, I'd say it's the biggest conservative issue these days.
>3. Does not exclusively mention left-coded problems, as terrorism is traditionally right-coded.
I noticed that too, but...
>This article is from 2018, does it not overlap that period?
>>About this article
>>Received 11 July 2022
>>Accepted 26 April 2023
>>Published 07 June 2023
...so it is indeed post-Jan-6. I'd say that one's close to neutral, and I'm not even sure which way it's leaning.
>Like, if someone has a bug up their ass about "the moral decay of America" and therefore wants to “make America great again”, does it not make sense to regard this as a mistake if in fact there is no moral decay?
Like I said above, I indeed think that logic is sound. However, I'm suspicious (and certainly the Trumpists would be suspicious) that this Nature article is not in fact following that logic but exploiting it, starting from the conclusion that people shouldn't vote MAGA and manufacturing evidence that moral decay is fake in order to convince people not to.
(I'm not quite making the accusation of the experimenters lying, more a case of "experimenter effect" plus publication bias on Nature's part.)
If you think of morality as a magnitude (which I think is the right way of thinking about it), then any move away from what you think of as right is a decline. You can't super not-cheat-on-your-wife. So it's a bit of a silly question. If morality hasn't declined then it hasn't changed. But it clearly has changed.
I don't think this is true as to society as a whole, which could shift from "lots of people cheat on their spouses" to "few people cheat on their spouses."
To biological creatures, evolutionary pressures are God. Therefore, since evolutionary pressures are inconsistent across place and time, God is always a creature of the moment, enforcing his will most brutally on yesterday's favorite children. The fundamental act of morality is patricide, our parents, demons.
Evolutionary pressures certainly exist, but they aren't God. Evolution functions more in a format of "spray everything at the wall and see what sticks" than "this has a chance to improve survival so let's try it out". Survival of the fittest only works sometimes.
Like a lot of my comments this one was sort of meta-satirical, which may not even be a thing, so apologies for any confusion. Unpacking the layers: I am basically just taking a common authoritarian "red in tooth and claw" social Darwinism and making a slight change in reasoning, to say we should use that reasoning to kill such people themselves. It's a weird joke that's packed in pretty tightly and that nobody ever understands, so I dunno why I keep making it.
I find it quite funny but you may want to work on subtly signaling that it’s humor because otherwise you just come across as a garden-variety internet maniac.
Insofar as the direction of travel for morality is driven by young people, an aging society would feel increasingly alienated from the moral trajectory simply on account of ratios of young to old.
This is a great point. I have thought of this before but never quite crystallized it fully. The slow aging of the population is going to create more discomfort with the "normal" rate of change. And I think many suspect the rate of change is above normal right now.
Expanding on what you briefly mentioned about wealth, I think you could divide morality into roughly two categories:
A) "be a good Roman" morality- meaning acting based on feelings of duty/ honor to do hard/ badass/ valuable things that benefit others/ make you look cool. This is more Nietszchean.
B) "don't be a nazi" morality- meaning being careful to not accidentally act on evil morals and serve some super villain dutifully in a way you think is righteous but actually is totally not. This is more like Socrates or something.
Morals in society during times of peace/ prosperity will generally be drifting in the "don't be a Nazi" direction for morality. Judge not that ye be not judged, stop to think before marching off to war, etc. I'd argue we're a lot better at "don't be a Nazi" morality in 2023 than we were in 1949, in spite of all the complaints about liberals being speech Nazis/ alt right people being literal Nazis. People in general are probably much more open minded, accepting, not racist, not sexist, empathic, and humble with regards to morality than they were then.
But "don't be a Nazi" morality is negatively correlated with "be a good Roman" morality. So since 1949 we've also gotten significantly less good at what you call "1940s morality." And while 1940s morality and Livy's model for morality are much different, they both have in common that they are calling people to behave according to specific norms, calling on people to live up to a specific model for good citizenship that involves a lot more than just "being chill." During times of war or economic hardship, "be a good Roman" morality probably improves, while "don't be a Nazi" morality probably erodes.
So this seemingly constant attitude that morality is in decline could on some level simply reflect this conflict between two opposing things that we happen to call "morality." Most of the time, civilizations slowly get more prosperous, leading to things shifting in a "don't be a Nazi" direction. But we focus on the negative and see how everyone is being less good Romans- being more directionless, apathetic, etc. Then a war breaks out and patriotic fervor explodes out of nowhere, labor force participation rates soar, the martial virtues that Livy loves so much increase. But as this happens, we can't help but notice that humans are murdering each other like animals, and all kinds of other atrocities are occurring. Clearly morality must be in decline, it's the end of the world. So perhaps the only time societies in general feel like morality is improving is right after a victorious war, when the soldiers come home and stop killing people, taking their disciplined habits and conformist haircuts with them. At these moments both type A morality and type B morality can seemingly be at new heights at the same time. But most of the time it will seem like the world is getting either more Nazi or more apathetic.
But the 'be a good Roman' morality isn't Nietzschean--they were acting on behalf of the Roman state.
I think you've got a pretty good description of the USA in the past 80 years as you have a transition from God, Family, and Country to Never Hurt Marginalized People, but lots of other times and places returning soldiers did all kinds of awful things. Were demobilized war veterans in Weimar Germany a moral force?
Thanks for your thoughts. I meant Nietzschean in the broader sense of valuing absolute standards rather than valuing relative things like mercy, kindness, sympathy, etc. But yeah, Nietzsche was pretty individualistic/pretty anti-state (at least in Thus Spake Zarathustra, where he basically called states false gods and parasites), so describing this "be a good Roman" idea as Nietzschean is perhaps misleading.
Yeah, returning soldiers often do awful things. I wasn't even arguing that triumphant returning soldiers lead to both kinds of morality actually improving (like in the obvious example of 1950s American culture), but that this triumphant return can create a general feeling of moral progress that might offset that more common attitude of moral decay. There were obviously some problems with 1950s American morality.
Fighting against Nazis doesn't implicitly make you good at "not being a Nazi." It is easy to imagine two groups of literal Nazis with very similar values still fighting each other for dominance in accordance with their Social Darwinist value system. Anyways, I was meaning to open up a discussion contrasting a morality that focuses on martial virtues/ conformity versus a morality that focuses on cautious thoughtfulness/ mercy. I had no intention of focusing on the real Nazis and the real people who defeated them.
"Don't be a Nazi" is ambiguous-- it's usually taken (in the west, not in Russia) to mean "don't even get near committing a holocaust", but Nazi expansionism killed a lot more people.
Except "holocaust" these days encompasses refusal to accede to pronoun demands. Nazi has long become a generic insult for a political enemy, whether in Russia or in the West.
"Transgender genocide is a term used by some scholars and activists to describe an elevated level of systematic discrimination and violence against transgender people."
I don't think Socrates is actually a good representative of that morality. He fought in war rather than worrying about whether it was morally acceptable to do so, and part of why he was so controversial is that some of his students had been involved in a tyrannical takeover of Athens. It's worth reading Willmoore Kendall on how moderns use Socrates as a symbol without understanding what he actually believed and why he drank the hemlock.
Sure, I'm not claiming to be any kind of expert on Socrates. I guess I mean "Socrates as represented by Plato." Sure he fought in a war when he was younger, but that wasn't unusual for Greeks during his day. The simple statement "I am wise because I know that I know nothing" is very much representative of this "don't be a Nazi" moral idea- thoughtful, introspective, etc. Plato's Cave (perhaps better called "Socrates' Cave") is another obvious example- the focus being on enlightenment and understanding, not obedience. Buddhism, Jesus' teachings in the New Testament, and many other ideological systems favor this thoughtful, fuzzy, critical, enlightenment-focused approach to morality over more traditional "be a good Roman" kinds of ethical systems. And whether or not Socrates 100% fits this "don't be a Nazi" morality isn't the focus of my argument, I'm just trying to highlight a dichotomy that seems to exist. Thanks for the interesting link, cheers-
This is a tangent but The Republic is an introspective work more so than a political treatise. This is mentioned no less than 3 times explicitly and made reference to in more minor ways throughout, and I feel like I'm taking crazy pills whenever people discuss it.
I suspect quite a few of the differences reflect changes in the distribution. Most people have got far less violent: you are far less likely to be attacked by your spouse, parent, boss, cops, teacher, or customers in your local pub. But 15-30 year old males linked to the drugs trade have got much more violent, and that explains all of the increase in violence.
Not my experience at all. I don't think random violence from family/friends/acquaintances was more (or less) common 50-100 years ago than now. Would be interested in seeing some source that you think contradicts this.
The other half of your assertion, that young criminals today are more violent than in the past, does track, but I'm also not sure if that's just my impression or real.
I'd be curious how spousal abuse was tracked or categorized back when it was considered more acceptable. I don't know if it's more or less common, but a man used to be able to force his wife to have sex with him and that was generally legal. Being legal, I doubt it was counted as violence in crime studies, but would definitely be on the mind of survey takers when talking about safety and violence.
I am not sure I follow all of the arguments in this post. Just to pick on one aspect: do we really suspect that people have the same inclination to report violent crimes (in particular rape) today as compared to 50 years ago? Research seems to document that more rapes are being reported in recent years, compared to 50 years ago. Hence, any uncritical reference to general violent crime rates is just going to be... biased.
Given that people complaining about moral decline seems to happen in most time periods, should we think that moral progress/regress and moral drift have been constant or moving in tandem for that entire time? Or perhaps my presumption that complaints of moral decline are universal is wrong?
I'm not sure it's true that "people complaining about moral decline seems to happen in most time periods". Most examples people offer come from a few, widely-scattered time periods.
I think it goes in cycles; I think over the generations we improve on some things, decline on others, until the bad consequences pile up and then we go for Moral Rearmament or Great Revivals. And those needn't be religious, they can be the secular virtues being championed by the leaders of the day (whether that be Thatcher with "Victorian values", something I think she got very wrong, or current progressives and "we should pay reparations").
To quote "The Screwtape Letters" about fashions in morality, though this is from the point of view of a devil trying to divert attention away from the real problems of the time:
"The use of Fashions in thought is to distract the attention of men from their real dangers. We direct the fashionable outcry of each generation against those vices of which it is least in danger and fix its approval on the virtue nearest to that vice which we are trying to make endemic. The game is to have them running about with fire extinguishers whenever there is a flood, and all crowding to that side of the boat which is already nearly gunwale under. Thus we make it fashionable to expose the dangers of enthusiasm at the very moment when they are all really becoming worldly and lukewarm; a century later, when we are really making them all Byronic and drunk with emotion, the fashionable outcry is directed against the dangers of the mere "understanding". Cruel ages are put on their guard against Sentimentality, feckless and idle ones against Respectability, lecherous ones against Puritansm; and whenever all men are really hastening to be slaves or tyrants we make Liberalism the prime bogey."
YES - supported by my post containing: "Civilization Cycles Compared"
And to extend the thinking of your posted book quote with the same author:
"Every now and then they improve their condition a little and what we call a civilisation appears. But all civilisations [civilizations] pass away and, even while they remain, inflict peculiar sufferings of their own probably sufficient to outweigh what alleviations they may have brought to the normal pains of man. That our own civilisation has done so, no one will dispute; that it will pass away like all its predecessors is surely probable.”
--C.S. Lewis, The Problem of Pain, 1940
In summary, I'd say some things change (Whack-A-Mole) but overall there's a consistency (same game) as a society/civilization morally declines (runs out of quarters), then resets (gets change for a 5 dollar bill).
This reminded me of Schwitzgebel's work (http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/BehEth-140123a.htm) aiming to show that ethicists were not more moral than others. But then you look at their measures and they are things like: "Ethics books were more likely to be missing than other philosophy books: 8.5% of ethics books that were off the shelf were missing or more than one year overdue, vs. 5.7% of non-ethics books, a risk ratio of about 1.5 to 1 (66/778 vs. 52/910, CI for difference +0.3% to +5.2% , Z = 2.2, p = .03)."
To be fair, asking people "how moral are you" is not going to work very well, and propensity to crime needs a very large sample size due to a very low base rate (especially when you control for class, since ethics professors are high-class). In 2014 there wasn't sufficient tech to straight-up mind-read people (and it's questionable whether even a full "download all memory" would achieve this, since a lot of dickery comes from *not caring* and thus isn't remembered). I suppose you could put bugs on people and examine their daily lives for a year, but then you'd get eaten by the IRB because the other people helped/harmed by those actions haven't consented to the research.
Trying to present quotes from Roman Empire-period historians as evidence that moral decline is all alarmism and illusions is particularly ironic, when Rome is a canonical example that a civilization can spoil, decay, and collapse. One can almost conclude that we're dealing with New Dark Age cheerleaders.
It's from Livy, which means it was over two hundred years until the Roman Empire's peace and prosperity collapsed. And about another two hundred until it actually fell.
And yet, Caligula took power just a couple of decades from then. Evidence suggests that a nation can bear plenty of ruin indeed, but not an infinite amount.
Caligula was in power for four years and everyone agrees he was sane when he was put into power. His insanity came afterward, most sources say due to an illness. His reign was also not all that bad from the average person's point of view. He expanded the empire and built a bunch of infrastructure. But he upset the army and the aristocracy so they assassinated him.
A couple of decades from then Nero took power. I'm not sure what your overall point is. Do you claim that there wasn't widespread depravity and corruption in the Roman Empire, long before it eventually declined and collapsed? That would be an interesting contrarian perspective for which I'd appreciate a source.
My point is that if you want to argue "depravity and corruption led to collapse" then you need to show that there was an uptick in these things prior to collapse. It hardly matters if there was two centuries before the collapse. If anything it's a problem for your theory since it means that an uptick in depravity and corruption didn't lead to a collapse then.
And if you're proposing some model where each generation gets worse and worse and it accumulates you need to explain Vespasian or Marcus Aurelius.
From where I sit, "morality" consists of a set of rules that Good People (TM) impose on everyone else. Women should be subordinate to men. Blacks should be subordinate to whites. Particular types of clothing should be worn, indicating one's status in the social hierarchy. Sexual activity must be done in the correct way, with the correct person. Everyone should participate in religious rituals. Etc. etc. somewhat ad nauseam.
I don't think my use of this term is especially unusual in my generation - just about everyone who felt they were should-on by the Religious Reich tended to adopt this usage. We remember the Moral Majority (TM), and what it claimed was "moral".
Perhaps some of my peers have mellowed with age. But if I'd been responding to any such survey, the results would be somewhat "through the looking glass".
This is also my context for these researchers' ideas of "objective morality". They've picked a set of rules - perhaps less pernicious than those purveyed by the Religious Reich (you do say they appear to be left wing) - and tried to create a metric based on those rules. Unfortunately, there isn't any uncontested set of rules that all will accept, even in a single generation - as you point out for the specifics of their "objective" metric.
Perhaps any attempt to measure people's perceptions of moral trends tends to measure conformity - to what extent does everyone in the society agree on the same shoulds and oughts?
Consider two types of violators of moral rules:
- the sinner violates rules that they agree with, or at least give lip service to. They conceal their sin if possible, and repent when caught.
- the outsider violates rules that they don't agree with. They are more likely to argue that your rules are insane than to conceal their activities.
I have no non-anecdotal evidence, but I strongly suspect that outsider behaviour counts *much* more than sinner behaviour in people's perceptions of what they see as immorality. And this is particularly true when the outsiders are right there in your face, claiming equal or higher status, rather than in some benighted foreign territory.
And that's what we have in the US - two or more separate solitudes that can't help but see each others' conflicting rules and behaviour.
When more than 50% of the country doesn't think my rules should be imposed, *whichever set I believe in*, it sure looks like moral decline, at least if I believe there was a recent time (the 1950s?) when there was something a bit more like majority agreement.
I wonder whether the unity of morals and politics is a new phenomenon, and whether one could prove it to be so. Certainly in Britain the main political parties both had traditionalist (i.e. Christian) and liberal wings, which would vote with the equivalent wing from the other party on moral issues, as recently as the 1970s.
(Does a situation like this indicate that a country cares more about economics than morality? If morality was more important, one might expect to get Christian and liberal parties with left and right wings. I think this might have occurred at times on the continent.)
"I wonder whether the unity of morals and politics is a new phenomenon"
Based on cursory knowledge, I'd say that the Great Schism was Western Christianity's well-intended effort to maintain theological integrity, but over time it became a consolidation of power. I've been reading an annotated version of the Philokalia (the Orthodox companion to the Bible, composed of quotes from the first 1500 years of Christianity) and there are stark differences compared to the modern versions expressed by Western Christendom (pun intended). It's far more practical and sensible, fitting much better with eastern philosophy.
If politics isn't about values, what is it about? Well, techniques for achieving them: but most people don't know much about economics and the like, so it's mostly about values.
I'm inclined to think of this in terms of personal virtue, and it strikes me that there is a component to one's moral character that is independent of their moral beliefs. Let's go back to Rome again and pick some noteworthy historical figures as examples, say Cato the Younger and Sulla, and think about their moral character. The reception of Sulla ranged from a monster among his political enemies to being treated with great embarrassment even among his fellow Optimates. In contrast, while Cato has received criticism for being too uncompromising in all ages, few would dispute his moral integrity, both then and now. And that's despite the fact that he was championing the sort of values that would e.g. have husbands kill their wives if caught drinking.
I would suggest that there are two components to one's morality: the ethical theory that concerns oughts and ought-nots (and is in itself composed of purely arbitrary or at best contextual mores like what is an appropriate way to dress, and a body of more rigorously constructed morality that is not /objective/ per se, but given assumptions from evolved human psychology, such as that pleasure is better than pain, can be demonstrated by an argument, and that I would argue has grown at least a bit during the course of human history), and then one's virtue, such that a virtuous character is more capable of being their best self in embodying their ethical beliefs. For example, in one time and context a just character might kill their wife for drunkenness, while in another time a just character might fight against police brutality, but the justness of their character is timeless: we are still inclined to think Cato is a paragon of cardinal virtues even though his ethical beliefs have something reprehensible to anyone alive today, and I would like to think that the Romans too could see the virtue in our contemporary moral exemplars, even if they would think their cause is beyond misguided.
And this raises a question that I think is in principle answerable although I don't know how: are contemporary people more or less virtuous than people were 10/100/1000/2000 years ago?
It sounds devilishly hard and possibly controversial to flesh out the specifics, but I tend to agree with such a view, that there is an important universal element to the complex construct that we call morality (that could be called "virtue" or "character"), and then a huge variable cultural construction on top, or even to the side.
The interesting part then would be to try to gauge if people are less moral in the intrinsic aspect than they used to be.
What I find strange about discussions on this topic, is that people tend to either propose 1) a complete stasis throughout history, or 2) a recent decline. Even before looking at any data or polls, I would tend to hypothesize a basically chaotic curve that goes up and down, with trends in either direction and at vastly different speeds at different times and locations.
Scott's comment on wealth makes a lot of sense; global wealth makes us less obviously dependent on each other's goodness, so social mores inevitably tend towards more individualism, which can be felt as a loss of morality. My first impression is that this would mostly affect the cultural aspect, i.e trying to achieve 1940s versus 2020s morality. Whether it would actually affect people's intrinsic goodness seems to me an open question.
This isn't a bad idea, but it does have some problems. Some ethical theories are easier to conform to than others. Some ethical theories are even ones that evil people have biases towards conforming to (such as ethical theories that require gossip, busybodying, or beating up or mobbing people). Note that this isn't the same as whether the beliefs of the theory are reprehensible.
People nowadays are very virtuous when it comes to ganging up on evil people on Twitter.
Not only is this article below the supposed high standards of Nature but if I recall correctly this messy study is based on Mastroianni's PhD. Social "sciences" are doomed.
You remembered correctly -- on the subject of Nature's modern sociopolitical self-immolation, I was rereading the NYT article today on the James Webb telescope naming "controversy"** and I was unhappy but unsurprised to read that Nature printed an editorial in support of the renaming campaign.
** If you're unfamiliar, it turns out an unsubstantiated rumour about James Webb being homophobic had been started and propagated by a couple of prominent physics activists. The rumour was eventually debunked, but the slander operation continued unabated for a while until the issue stopped getting political oxygen.
Great post! I skipped on the "study" (delete unread), as I was sure not to learn anything new. I never skip Scott as I always learn/think sth. new. My morals should be 1970, and they are in some ways, but maybe not the whole package (church on Sunday was a thing in my family and most others/ divorce, kids-out-of-wedlock did happen rarely/ gay?/ LBTIAR+? kidding).
I do believe our "morals" in post-war Germany improved clearly: people born in my country are usu. acting much more domesticated today than they did in 1986, and from what I read, 1962 was barbaric (google: Straßenschlachten in München 1962 - street-battles weekend after weekend for no apparent reason) . The past is a foreign country. - We do have more people arriving with other Schelling-points nowadays, true. But I see some of their points moving fast.
Yes, an "illusion" because modern we have rejigged morality to mean "all the old stuff our forebears said was sin and dumb cruft like that, we now say is perfectly cromulent and normal". The same way that owning slaves was not immoral in a society where slavery is commonplace and accepted, but only becomes immoral when later generations decide "no, you neither should nor can own slaves". Morality is subjective, a standard we draw up new measures for in every generation. There never was a Golden Age of the past where everyone was perfectly moral; it is up to us to define and create new Golden Ages.
No, not an "illusion" because hell yeah there's a moral decline from the standards of the past. Owning slaves is always wrong because humans are not property, and it doesn't matter if it's a thousand years ago in the Classical world or today in Africa. We've redefined morality to mean "the things we disapprove of" and that can be racism or sexism or transphobia, but the things we like and want to do are now okay - no they're not, not if you cleave to a standard of absolute/objective morality. We flatter ourselves that we are becoming kinder, nicer, more moral by comparing present day to the worst parts of the past (slavery, colonisation) but we omit comparisons with social and civic values we find inconvenient or which would leave us in a worse light.
Fair enough, self-deception to see ourselves in a favorable light is surely a powerful bias. But the big question is to distinguish the aspects of this gross construct called 'morality' that just change over time like the weather, and those that actually constitute some kind of 'moral fiber'.
It sounds very neat to cleave to some standard of absolute or objective morality, but the present reality is that there is no society-wide agreement on such a thing. Hell, I can barely agree with myself from one day to another on such matters, let alone with fellow forum posters from the other end of the world!
People can and do own slaves, depending on what you mean by owning. Domestic and industrial workers (generally people without legal residence) can be trapped into doing work.
The difference between now and then is that trapping people isn't as common and isn't respectable.
Indeed, fifty years down the line, owning pets might be seen as slavery (claiming to own non-human animals and forcing them to live under inhumane conditions, sterilising them, keeping them from the company of others of their kind, and so on).
Yup, some of the more extreme animal rights advocates' views look like a new moral panic being born...
( Of course, fifty years is long enough that the debate of 2073 might be between ASIs on whether it is ok to own human pets. I wonder if they will neuter them? )
I'm not sure about these, but I also see other possible explanations.
Could a perceived decline in morality also be related to more and more of our lives being governed by large institutions which, while consisting of moral people, are driven by other incentives, e.g. profit maximizing companies?
Another reason for perceived morality decline could stem from less direct interactions with other people and more of our view of morality being affected by news and social media where immoral actions might be more widely publicized.
So while the average person might be regarded as equally moral as before, society could still be perceived as less moral.
<i>And this is part of why I find the introductory quote by Livy so annoying. What was morality to Livy? Respecting the lares and penates. Performing the ancestral rites. Chastity until marriage, then bearing strong children (Emperor Augustus’ famous law demanded at least three). Martial valor and willingness to die pro patria. Commoners treating patricians with the respect due a noble class, and patricians treating commoners with noblesse oblige.</i>
Since the paper uses violence as a proxy for moral decline, it might be interesting to see how violent Livy's own age was. Of course, we don't have crime statistics for the ancient world, so we can't really say whether personal murders or assaults were more or less common, but political violence was certainly prevalent. Livy was born in 59 BC, and published his first books probably around 27-25 BC. Between the time of his birth and becoming a published author, therefore, Livy would have seen no fewer than *six* civil wars (Caesar's Civil War of 49-45 BC, the War of Mutina 44-43 BC, the Liberators' Civil War 44-42 BC, the Sicilian War of 42-36 BC, the Perusine War 41-40 BC, and the civil war between Octavian and Antony 32-30 BC). His father's generation (say, the forty years before Livy's birth) would have seen another five or so (depending on whether you consider the Sertorian War as separate from Sulla's second civil war, and whether you count the Third Servile War or not). That's... really quite a large number of civil wars, especially when you consider that Rome had once been unusually politically stable compared to other Mediterranean city-states.
And it's not as if the Roman elite were all upright and virtuous between their bouts of civil warring, either. There are plenty of anecdotes from this era about the corruption and rapacity of Roman governors. Again, we don't have precise statistics, but opportunities for corruption had certainly increased (more distant, wealthy provinces --> more opportunities to extort stuff from the locals with minimal oversight from the government back in Rome), so it's at least probable that the anecdotal evidence does capture a real trend here. And of course, this rapacity didn't remain confined to the provinces, either: Sulla proscribed many citizens simply to get at their wealth, and the Second Triumvirate were notorious for seizing people's land to redistribute to their own soldiers.
On a more personal level, I remember my classical lit professor at university used to argue that there probably was more extra-marital sex in the late Republic than in previous eras. Basically, the big wars of the period (both civil and foreign) took elite men away to the provinces for sometimes years at a time, while their wives and daughters were usually left back in Rome. So you had a large group of upper-class women with both the motive to commit adultery (since the only alternative was going without any sex at all for years on end) and the means to do so (your husband can't easily keep an eye on your behaviour when he's off fighting in Gaul or Syria).
So we have a state trapped in a cycle of political violence, with corrupt and extortionate rulers, and high levels of adultery among the upper classes. That sounds like a situation where someone might reasonably say "Yep, moral standards aren't as high as they used to be."
Also... These sorts of "Here's someone in the past complaining about falling moral standards" anecdotes usually expect the reader to fill in "...but everything worked out fine anyway, so clearly it was all just a big fuss over nothing." But let's consider what actually happened in the late Republic. On the one hand, the Roman state survived, so I guess things worked out fine in that sense. On the other hand, you had a good century or so of escalating political violence, culminating in the abandonment of Rome's traditional constitution and the imposition of a military dictatorship because that was the only way people could see of stopping the continual civil wars. I think it would be quite reasonable for someone to consider this a pretty bad outcome.
Given that this is about trends, it does seem notable that all those civil wars happened before Livy's thirties. The second half of his life was the beginning of the Pax Romana. He straddles the divide between a century of violent internal strife and two centuries of unprecedented peace.
I think a lot of people misunderstand the early Imperial writers' complaints about decadence. They aren't saying that the barbarians are going to start beating down the gates of Rome any time soon; they're saying that the Roman people have become unfit for self-government, and are therefore doomed to live under tyranny (a much more plausible claim, given what actually happened).
Both the MG paper and the post cite Livy. Funny nobody cites Ecclesiastes "Say not thou, What is the cause that the former days were better than these? for thou dost not enquire wisely concerning this".
Actually, they don't. The post cites the MG paper, the paper cites some book... I would not be surprised if the book cites another book or paper, et cetera, until after a few steps the chain breaks somehow (the N-th source does not provide a source, or does not even mention the thing it was referred for).
I tend to agree that MG are coming in with their own set of biases; if morality is not declining, then people today are as moral as people in the past.
But that also means that people in the past were as moral as people today, and we can look at things like "racial injustice" to see that is not so.
So there can only be no moral decline if people today are *more* moral than people of the past. And if it is possible to improve morally in some areas, then it must also be possible to decline morally in some areas, unless we also propose that the present is becoming more moral on *every* measure.
Also, they slip this little nugget in:
"The United States faces many well-documented problems, from climate change and terrorism to racial injustice and economic inequality—and yet, most US Americans believe their government should devote scarce resources to reversing an imaginary trend."
But "racial injustice" *is* a moral issue, and we may take it from the above that MG feel one where we are *not* more moral, that we have stayed at the same level of immorality as the past (or maybe even got worse) for this measure. And there are calls to devote resources to reversing this trend of "racial injustice".
So what is it? An imaginary trend where we don't need to devote scarce resources to reversing it in the case of "racial injustice", or it's only imaginary when it's about issues conservatives care about but for liberals/progressives, it really is a genuine example of moral decline which must be reversed?
Well, culture does have a cumulative effect. Increasing numbers of people, more interconnectedness, better communication and possibly more leisure time together have brought us from survival to luxury. If accumulation works to such an extent on knowledge, technology and the arts, why could there not be some amount of general progress in morality throughout history?
On one side we have the basic relativity of things - what we grow up with is our frame of reference for "normal", and we accept it as an implicit baseline, warts and all. On the other side, we have the basic capacity to put ourselves in another's shoes, and to notice injustice towards them, but it requires attention and a bit of distance from the old normal. So once someone has pointed out that, for example, slavery is unjustifiably wrong, it does resonate, and it's very hard to put it back in the bottle.
These days a similar battle is happening around animal rights. It remains to be seen just how far our capacity for inter-species empathy goes.
I think it's a mistake to assume that the present situation is normal. While morals may drift somewhat from generation to generation, I think the recent moral upheavals are damn near unprecedented, with views of morality on many issues completely reversing within the space of one or two generations, e.g. from the point where you could be jailed for being homosexual to the point where you can be jailed for saying that homosexuality is wrong. This type of thing just doesn't happen all that often.
Our hypothetical person born in the 1940 who complains about the moral decline he's seen is well aware of the metamoral nature of his complaint -- he understands it's not just that the world has got worse according to his moral rules, but that the world has replaced his moral rules with a whole new set of moral rules. Of course the new set of rules is better by the standard of the new rules and worse by the standard of the old rules.
I wonder whether the Protestant Reformation felt like a similar upheaval, when (in some places) being a loyal Catholic went from being the default to being very wrong.
Depends where you're at. In the UK, you might at least get arrested:
"On 2 September 2006, Stephen Green was arrested in Cardiff for distributing pamphlets which called sexual activity between members of the same sex a sin. On 28 September 2006, the Crown advised Cardiff Magistrates Court that it would not proceed with the prosecution.[19][20] [...]
On 20 April 2010, police arrested Dale McAlpine, a Christian preacher, of Workington in Cumbria, for saying that homosexual conduct was a sin. On 14 May 2010, the Crown decided not to prosecute McAlpine.[26] Later still the police apologised to McAlpine for arresting him at all, and paid him several thousand pounds compensation.[27]"
True, police do sometimes arrest people without legal basis. Likewise, in Canada it's technically legal for women to go topless, but I wouldn't be confident the police won't arrest them anyway. So fine, I amend my comment to "you cannot be jailed legally".
Okay, because I'm not working for the rest of the day (finished the tasks that needed doing) but I'm up and online so I might as well be doing something, let's have a gander at some of the questions.
(1) "Is there any area near where you live -- that is, within a mile -- where you would be afraid to walk alone at night?"
Depends. There were "bad" parts of town within a mile that have since improved vastly, and the new "bad" parts are a bit more than a mile away. I absolutely wouldn't be walking alone at night on the weekends when the pubs let out, because a bunch of drunk aggressive idiots getting into fights and petty vandalism aren't too discriminating about not picking on innocent passers-by. But quite likely, 'twas ever thus everywhere.
(2) "They mention in the text that these kinds of questions did better than others; 50% report improved treatment of gay people. But what are the other 50% thinking??! The answer has to be something like “2002 to 2013 is too short a time to measure even extremely large effects that were centered around exactly this period”. But then what does that mean for the rest of their data?"
Re: the other 50% who don't think treatment of gay people improved in 2013, I Googled "gay marriage 2013" and found that was the year the Defence of Marriage Act was struck down by the Supreme Court. So if you're pro-gay rights in 2013, you might well feel "We had to go to the frickin' Supreme Court to get our natural rights because the knuckledraggers passed laws to deprive us of them and *still* state bans on gay marriage are not ruled unconstitutional, improved treatment my sparkly kinky Pride ass."
(3) "Generally speaking, would you say most people can be trusted or that you can't be too careful in dealing with people?"
Oh, gosh. My native tendencies make Eeyore look like Pollyanna, so I'm heavily in the "Trust but Verify" camp. This has as much to do with "are you an optimist or a pessimist?" as it does with perceptions of moral decline. And when incentives to be backstabbers in order to get ahead are more plentiful and more encouraged ("only losers take the bus") and there is more social atomisation, more fragmentation, less cohesion, the perception of work and the loyalty owed between employer and employee has changed, 'greed is good' and so forth, when you are interacting with more strangers and there are a lot more scammers out there, then yeah, I think it likely that people will perceive "you can't be too careful".
About your point number 1: that mostly matches where I live. There are areas within a mile that I'd be afraid to walk alone at night (or in the day). My own neighborhood feels safe (or safer). But those "bad" areas have probably improved over the last 20 years or so. But then, there's a hipsterish neighborhood I live close to. It apparently was a horrible place in the 1980s and early 1990s, became a gentrified and therefore "safer" place in the late 1990s through mid 2010s, and now is starting to get the reputation of being more dangerous.
Edited to add: And the reason this is relevant is, if one lives in a very large city (as I do), one almost always lives within about a mile of a dangerous-at-night area, regardless of whether "morality" (however defined) is improving. Of course, the actual level of danger in the "dangerous" neighborhoods may change. I'd probably feel safer in the hipsterish neighborhood at 2am now than I would have in the other neighborhoods at morning rush hour (which for some reason "feels" like the safest time to be there) 20 years ago.
I've always thought poll question (3) is a false dichotomy. Yes, most people can be trusted. And yes, you can't be too careful, because the small minority who shouldn't be trusted can do a lot of damage if you do trust them.
That's a good point! There was an earlier discussion about locking one's doors. I lock my doors when I leave the house, not because I expect the average person to steal anything - but all it takes is for 1% to be systematic thieves to justify the precaution.
The first question really should be changed to emphasize human threats. I'd be terrified of walking into the swamp that is a quarter of a mile away from my house at night because I might step on a poisonous snake or a gator. That has no bearing on the morality of my neighbors. But of course, the authors didn't write the questions; it's just an old one that kind of works.
To be fair though, the degree of "trust but verify" varies greatly between societies. For example, I was born in the glorious USSR, but since then have been assimilated into your Western capitalist pig-dog culture of global imperialism (tm). So, the other day, I was eating at a local cafe, noticed that I ran out of sugar for my coffee, then got up and got more sugar from the tray across the room -- leaving my cellphone unattended at the table in the process.
When I realized this, I felt a brief spike of terror, because in the glorious USSR my cellphone would absolutely be gone by the time I got back to the table. If I went to the police and tried to file a report, they'd look at me like I was crazy: "you left your stuff and now it's gone, it's your own damn fault, now stop wasting our time". Yet here in America, in the area where I live, my cellphone was relatively safe. In fact, if someone tried to take it, I bet other bystanders (or the staff) might even try to stop him.
However, note that both Soviets and Americans would see what happened as perfectly moral (or at worst morally neutral). And there are places in the world today where even flashing an expensive cellphone on the street will get you stabbed (or at least robbed); and the denizens of those places consider *that* to be a morally neutral outcome, as well. It all depends on your perception of normality.
There is probably an element of illusion in the perception of moral decline in that people get more concerned about these things as they get older and have children and eventually become more dependent. Just as there are always older people complaining about how young people don’t know how to speak properly.
But that doesn’t mean there can’t actually be moral decline.
My guess (not having looked in detail) is that the measurement graph is not considered much evidence in favour of anything because, frankly, it's a noisy mess. Sure, the average of those points is dropping, but those points are so all over the place that any change would need to be either super drastic or super sudden to be significant for this sort of analysis... I guess
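To make the "noisy mess" point concrete, here is a small self-contained sketch. The data are simulated, not the paper's, and all numbers (trend size, noise level) are purely hypothetical: the point is just that a small real drift can fail to reach statistical significance when year-to-year noise is large relative to the trend.

```python
import math
import random

def ols_slope_t(xs, ys):
    """Return (slope, t_statistic) for a simple least-squares line fit."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    # Residual variance gives the standard error of the slope.
    sse = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    se = math.sqrt(sse / (n - 2) / sxx)
    return slope, slope / se

# Hypothetical survey series: a small true decline buried in large noise.
random.seed(42)
years = list(range(40))
values = [-0.02 * t + random.gauss(0, 1.0) for t in years]
slope, t = ols_slope_t(years, values)
print(f"fitted slope = {slope:.3f} per year, t-statistic = {t:.2f}")
# Rough rule of thumb: when |t| is below ~2, the decline cannot be
# distinguished from noise, however real it may be.
```

With noise this large relative to the trend, most simulated runs land well short of significance, which is roughly the situation the scatterplot above seems to be in.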
Also, Scott is alarmed that the average *perception* of trustworthiness decreased, but doesn't that kind of match the paper's thesis? Scott seems to assume the perceived decline reflects a real decline in trustworthiness.
Really good take on the recent lack of moral decline paper. It would be great to see a detailed response. When reading the original by @Adam Mastroianni a quote from the British historian Dominic Sandbrook came to mind: ‘there are moments in history when disputes about history, identity, symbols, images and so on loom very large. Think about so much of 17th-century politics, for example, when people would die over the wording of a prayer book.’ Those people would think we were immoral beyond belief … https://www.theguardian.com/world/2021/jun/13/everything-you-wanted-to-know-about-the-culture-wars-but-were-afraid-to-ask
Hypothesis: People think the world is getting worse because, if they grew up in an environment which was reasonably trustworthy *to* *them*, they will think morality is in pretty good shape. Later, they will experience various defections, and think that things are getting worse.
Also (and thanks for underlining changes in morality) people when they're young may be more willing to believe that standards of public morality are being upheld and acquire a more accurate, cynical view with time.
Possibility for recent decades: computers have made defection much easier. I remember when you could pick up a phone, and you would almost certainly be hearing from a person who wanted to talk with you personally, not an advertisement or a fraud or an automated system doing who knows what who just hangs up. People may have been just as willing to do phone scams in the sixties, but the opportunity wasn't there. A steady drizzle of small defections sounds like it isn't being caught by those studies.
I came up with "nostalgia is fond memories of when your knees didn't hurt". Is it possible people think the world is getting worse because their health is getting worse?
>But when people say “there was more morality back in the good old days”, they rarely mean “in 2000 compared to 2015”. Even if moral decline were constant and linear, 2000 - 2015 might be too short a period for ordinary people to notice the difference.
I've seen multiple commenters who say that gay rights are fine, but that trans rights are a bridge too far and an example of moral degeneracy. If we take them at face value, then they really are yearning for a return to the good old days of the 2000s.
No, it's not just people saying "I support trans people but we can't allow children to transition" (or some other specific thing that they're against), you also can find people saying things like "LGB people would get lots of support from the right if they just dropped the TQ+ part" or "we were fine with gay marriage when we were promised that gay couples would be nice normal people like Pete Buttigieg, but now this trans stuff has gotten out of hand and we're suspicious of the whole movement."
That's the point -- "trans rights" meaning "transfolx get the same rights as everyone else" is different than "transfolx get additional benefits/options in order to make their life better." It's entirely reasonable to believe that free gender affirming care is not something that the government should provide (since you know, most people don't get free vision-correcting care etc.) without being tarred as being a transphobe.
I guess this idea that everyone adopts the zeitgeist of their formative years and sees all changes to that as moral decline is actually a pretty good explanation. Empirically I believe it to be true. But there's a follow-up question - *why* do we do this? Why are we so morally inflexible? Why, when I grow old, will I be unable to accept whatever the cool kids believe?
You could call it moral maturation instead of moral inflexibility. And what you believe then will be a firmer, more coherent, better calibrated and informed spin on your current beliefs (with some of them probably tossed out wholesale).
What the cool kids believe is relevant for the other kids. But what are silly children's games to a mature, experienced adult?
But I don't think that's the model of belief-forming that Scott Alexander is proposing. Our beliefs don't mature as we grow older, they stay largely similar. The generation below also does the same thing - but with a different set of beliefs. The reason for generational difference is not down to differing levels of maturity, it's down to having grown up in different eras, or in different cultural context. We're not in different places because we've moved further than them from the same initial spot; rather, we started in different places. It's a cohort effect not an age effect.
One other thing I think you can do to help make sense of things is swap "social cohesion" for morality.
I think that the way social cohesion tends to work across much of history is there are brief periods where cohesion increases dramatically, followed by long periods of slow decline. The slow decline of social cohesion is often an *increase* in living standards, because folks are making fewer sacrifices and focusing more on their own well-being.
If this toy model reflects reality, it also mechanically explains the way that observers throughout history keep noting apparent moral decline. Every generation except a few rare exceptions live and die within those long periods of slow social decay.
Interesting. By "there are brief periods where cohesion increases dramatically" do you mean crises when everyone has to "pull together" to survive (e.g. London during the Blitz)? Or moral and religious revivals? Or (in a much darker sense) times when the morally-nonconformist are killed off en masse?
To elaborate the toy model, "normal" conditions lead to slow decline of social cohesion. Extraordinary conditions create high-variance moments. Sometimes they're really bad and you have a catastrophic collapse, other times they're really good and everyone pulls together.
Cases of collapse create an environment in which new modes of social organization can compete, and whichever is best at generating social cohesion tends to win, thus leading to another high point from which the slow process of decline can begin.
This is obviously sort of like the "hard times lead to strong men, strong men lead to good times, good times lead to weak men, weak men lead to hard times" meme. But it suggests it's more of a punctuated system than an even cycle. Crisis moments cause a lot of creative destruction and rewrite the rulebook, then future generations coast until conditions force them to adapt to something new.
To top it all off, we could also add that not all high-cohesion societies are equally moral. Some high-cohesion societies might be very moral, and their decline is regrettable. But if the prevailing society is oppressive or otherwise bad, social cohesion may be a force for evil, holding together something that ought to be allowed to perish.
Lol, I had forgotten about the difference in rotary phone numbers!
Also, it reminded me of something that happened recently: I hadn't used my debit card in PIN mode for so long (compared to the very short-range radio "hold" mode) that when I actually went to the bank to put cash in, I realized that I had forgotten it!
(Thankfully, I still remembered it during the trip to the nearby store.)
"There is less homicide today than in 1900" (do you mean 1960?) (note 2). In any case, when comparing homicide rates across time, you must control for the fact that medical care has greatly improved over time. A patient with a stab or gunshot wound that would have been lethal in earlier times is now much more often saved on the operating table. Hence fewer homicides, and more (merely) violent assaults.
Yup. Anyone have some estimates on the numerical size of the effect? Ideally one would want to send 100 NIST-standard reference material shooting victims to an ensemble of ERs every few years, and track the trend in survival rate, but this would be hard to arrange... :-)
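Not an actual estimate, but the arithmetic of the adjustment is simple enough to sketch. Every number below is hypothetical, purely to illustrate the mechanics: if a fraction (call it the "lethality") of serious woundings proves fatal, the underlying attack rate is the homicide rate divided by that fraction.

```python
# All figures hypothetical, to illustrate the "better trauma care masks
# violence" argument rather than to estimate it.
def implied_attack_rate(homicide_rate, lethality):
    """Serious-attack rate implied by a homicide rate when a fraction
    `lethality` of serious woundings proves fatal."""
    return homicide_rate / lethality

# Same underlying violence, but case fatality falls from 50% to 25%:
past = implied_attack_rate(homicide_rate=8.0, lethality=0.50)  # per 100k
now = implied_attack_rate(homicide_rate=4.0, lethality=0.25)   # per 100k
print(past, now)  # 16.0 16.0 -- homicide halves while attacks stay flat
```

So the size of the correction hinges entirely on how much trauma-care lethality has actually fallen, which is the number the hypothetical reference-victim experiment would pin down.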
This study fits well in the category of "human history started in 1965 and everything before that was basically cavemen and cartoonishly villainous robber barons twirling their moustaches."
The natural comparison would be to look at periods where people claimed moral decline vs moral strength, especially but not exclusively contemporaneously. These periods do in fact exist, and the idea that everyone at all times has thought we are in moral decline is something of a myth. If your contemporary-morality theory is right, you should expect younger people generally to think we are in a morally strong period. But I don't feel (I know, I know) that that's the case today, for example - that the average 20-year-old feels America is an especially moral society.
You have several examples even within the last century: the genteel morality of the Edwardian period vs the 1914-1929 excesses; the hardscrabble self-sacrifice of the 1930s to 1950s vs the free love, radicalism, and drugs of the 1960s to 1970s. And so on. I'd start there to tease out the differences. (Also note, contrary to some political claims, those are not all especially conservative periods.)
Thanks, upon reading this (especially the reasons driving them) looks like the paper might be less vague and worthless than I assumed, though it's still not great...
This may have to do with the definition of ‘morality’ more than anything else. At least some respondents will include sexual promiscuity and open homosexuality among the immoral behaviors that are increasing, together perhaps with tattoos and obesity. None of these have anything to do with being mugged or not being able to trust others with money, but they are all frowned upon by traditional religions.
> 50% report improved treatment of gay people. But what are the other 50% thinking??!
With the caveat that this is "epistemic standard: stab in the dark", there are certainly "progressive" spaces nowadays where it's forbidden to mention that treatment of minorities has got better, because that would be like claiming that things are maybe not so bad, which would call into question the whole need to burn society to the ground.
There is, of course, a kernel of truth to this idea - homophobia has not completely gone away, and there's still places and jobs and families where being gay means you're going to have a very bad time. I don't disagree that there's still lots of work to be done. But I also think it's both correct and relevant to mention that being gay went from illegal to legal, or that a gay person working in a democratic-leaning large corporation will have a much better workplace environment than a generation ago, at least as far as their orientation is concerned.
Worse than that, they'll tell you treatment of minorities has gotten worse because it's less explicit. I've interacted with people who will tell you without the slightest hint of insincerity that the US is a more racist place today than it was in 1955, and the fact that you see so much less overt day to day racism is actively treated as evidence of that claim.
Absolutely. And this very day, June 30 2023, we have an excellent example: *because* we live in a society where government power is being used to compel people to proclaim their approval of gay marriage, and SCOTUS said, "how about let's not", therefore the masses are essentially being told that Clarence Thomas has personally killed and eaten a dozen gay babies on the steps of 1 First Street. The first piece of evidence, the background situation where government power is being wielded this way, is just the moral arc of the universe: it has nothing to do with treatment of gay people, it's just the way things are. The latter, the single step back by SCOTUS, is evidence of supreme backsliding from this high ground; of worsening treatment. So *of course* people who have any ounce of trust in the media think that discrimination has gotten worse. It is inconceivable that they could think otherwise when the evidence is put into the boxes it is put in. There's no way a study repeated today would find anywhere near as high a number as 50% reporting improved treatment. But what's truly baffling is that Scott doesn't understand why. What is he thinking??!
"Compared to the past, have things gotten better, worse or stayed the same [regarding] treating African-Americans with respect and courtesy? (2002 vs. 2013)"
Uhm... what if a respondent does not actually respect Hispanics, gays, and/or African-Americans himself? If their mistreatment is framed as a problem that can get better or worse, presumably depending on social attitudes, then some of the respondents must dislike those groups - or else the poll would be unrepresentative and therefore worthless. But if a gay-disliking person notices that gays are treated with (from his perspective) undue respect and courtesy, he would have to answer that the situation got worse. How are those questions not garbage in, garbage out? [Maybe I'm missing some context?]
It's one thing to define moral progress as equivalent to nicer/kinder treatment. But I do not see how the survey can even measure by that definition, given that the respondents would not necessarily share that assumption.
Eh... I think the authors did not actually use that kind of question? He states that "For now, we only care about the parts of morality where pretty much everyone would agree." in his "A note for the Pedants" (thank you for thinking of me!).
I had been assuming that this general study was accurate and fit with my perception of how people experience negativity bias, but I did think of an alternate explanation apart from a critique of the study itself.
At some point I switched from a "things are declining" conservative to a "many things are great and getting better and it will all work out" conservative. But consider the implications if it were a very common belief that things will work out, especially that they will pretty much naturally work out. A society without at least a strong component of people who believed in decline might be pretty much guaranteed to go ahead and experience it.
It might be like a driver on a road trip who notices they are 3/4 of the way to the destination and concludes that the belief that taking your foot off the gas will reduce your momentum has clearly not been borne out. In this sense, negativity bias and accurate assessments of actual moral decline may both exist as a social and personal form of keeping your foot on the gas.
"Whig history (or Whig historiography) is an approach to historiography that presents history as a journey from an oppressive and benighted past to a "glorious present". The present described is generally one with modern forms of liberal democracy and constitutional monarchy: it was originally a satirical term for the patriotic grand narratives praising Britain's adoption of constitutional monarchy and the historical development of the Westminster system."
Online copy of the book, which is short enough to read quickly:
"The danger in any survey of the past is lest we argue in a circle and impute lessons to history which history has never taught and historical research has never discovered — lessons which are really inferences from the particular organisation that we have given to our knowledge. We may believe in some doctrine of evolution or some idea of progress and we may use this in our interpretation of the history of centuries; but what our history contributes is not evolution but rather the realisation of how crooked and perverse the ways of progress are, with what wilfulness and waste it twists and turns, and takes anything but the straight track to its goal, and how often it seems to go astray, and to be deflected by any conjuncture, to return to us - if it does return - by a backdoor. We may believe in some providence that guides the destiny of men and we may if we like read this into our history; but what our history brings to us is not proof of providence but rather the realisation of how mysterious are its ways, how strange its caprices — the knowledge that this providence uses any means to get to its end and works often at cross-purposes with itself and is curiously wayward.
…Instead of seeing the modern world emerge as the victory of the children of light over the children of darkness in any generation, it is at least better to see it emerge as the result of a clash of wills, a result which often neither party wanted or even dreamed of, a result which indeed in some cases both parties would equally have hated, but a result for the achievement of which the existence of both and the clash of both were necessary."
"I think (not at all sure!) that this means “the year of the survey explained only 0.6% of the variance in responses”. That sounds tiny. But looking at the graph, the effect looks big. I would file this under “talking about percent variance explained is a known way to make effects sound small”, although I’m not sure about this and I welcome criticism from someone more statistically-literate."
The slope is striking but what's not shown is the literally thousands of data points to show how poorly that slope fits the data generally (aka low variance explained). You can already tell that birth cohort is a pretty big deal, perhaps age of respondent even. Both of these have something to do with "years" but are not the year of response variable. Another source is plain old-fashioned between-participant variability - people within a cohort are more similar to each other than people between cohorts, but even within a cohort, some people are trusting and others aren't. Year of survey response isn't the key factor in explaining variance in trust, other things are.
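The mechanism described here - a real trend visible to the eye, yet a tiny share of variance explained - is easy to reproduce with a toy simulation (all numbers invented purely for illustration, nothing taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey: a small but real decline in "trust" per survey year,
# buried under large between-respondent variability.
n = 10_000
years = rng.integers(0, 20, size=n)                    # survey year, 0-19
trust = 5.0 - 0.02 * years + rng.normal(0, 1.5, size=n)

# Simple least-squares fit of trust on year.
slope, intercept = np.polyfit(years, trust, 1)

# For simple regression, r^2 is the squared correlation.
r2 = np.corrcoef(years, trust)[0, 1] ** 2

print(round(slope, 3), round(r2, 3))
```

With these made-up parameters the fit recovers the -0.02/year trend almost exactly, while r² stays tiny: "percent variance explained" is dominated by the between-respondent spread, just as described.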
Hot take without having read the study or even this article yet: people love to quote writers from antiquity writing about moral decline as though that proves that moral decline isn't real. But of course: all of the societies from classical antiquity *really did collapse* centuries ago! And from what little I know about the fall of Rome, it really did have something to do with Rome's inability to produce younger generations with the tenacity and vigor of the earlier ones.
Civilizations have been worrying about moral decline forever, and civilizations have also been collapsing catastrophically forever. It is not obvious to me that these are unrelated.
I'm pleased that Scott's third point matches my comment on their substack at the time, though I used killing one's son as the example rather than one's wife.
Rereading my comment I see that I was agreeing and amplifying (though not maliciously -- I really like Adam and his substack) and strongly implied that the changes in morality from the past to the present were in line with the shift noted in Roman days. That is, people are becoming more accepting and compassionate, but this is a disaster from the perspective of traditional morality -- if we don't execute drunken or adulterous women (including divorcees) and beat children constantly for minor infractions, and put gays in the bog, and hang race-mixers from trees, how on earth are we going to maintain the /mores/ of eras where good upstanding people approved strongly of these measures? It's a chicken and egg problem.
I agree that it's a useful filter but I'd claim it's still chicken/egg. All the conservative types that I know feel panicked about the world not because of crime or street defecation (both in places they already avoid and don't care about) or the economy (which they often appear to think is doing better than I think it is) but precisely because of the moral decline. If they weren't conservative already they wouldn't think they were just clinging to survival. What Scott says elsewhere about how tribes have a right to exist (I think in the "Archipelago" original?) might explain that a bit, but the solution is still to leave a threatened conservative tribe for lesbiantopia or whatever. And if someone doesn't want to do so, we get into all kinds of Bulverism -- conservatives want a strong society to whip them into shape because their internalized toxic masculinity makes them feel flabby and weak, or they're afraid of self-reflection because they just need authority, etc. I'm not saying Scott implies these, they're just examples of how it's a chicken/egg problem -- they wouldn't need X morality if they weren't in group X, but they wouldn't be in group X without it.
Yes, to be more accurate, it's the best argument against naive moral anti-realism.
Nothing has any right to exist, that's a liberal fantasy. Existence is clawed out of an indifferent universe by any means necessary (which also includes liberal ones, when appropriate). It's just natural selection one level higher. There's no universally correct morality, but there's a meta-algorithm which describes the correct morality for any actual material circumstances. Which is not to say that the Archipelago is a bad idea; having more diverse "laboratories" would help the meta-algorithm converge quicker.
Here though you're using "conservative" as some super-specific (USA?) ethnicity, while actual conservatives are about being very careful about breaking things, for instance conserving liberal values like free speech and abortion rights (the latter is probably on the fence, since it's "only" been around half a century).
I thought you'd cited Adam Mastroianni before on ACX? I associate Experimental History vaguely with Erik Hoel's The Intrinsic Perspective, or Slime Mold Time Mold, or Infovores, all of which I think have also been discussed here.
Adam Mastroianni (one of the co-authors) is actually a reader of your Substack. He won an Honorable Mention in the Book Review Contest for Memories Of My Life, which you also reference here: https://astralcodexten.substack.com/p/galton-ehrlich-buck
"The r^2 statistic of the graph above is listed as '-0.006'." No, the r^2 statistic (which cannot be negative) is given as 0.008 (on page 56 of the Supplement), and the square root of 0.008 is about 9%. This does not affect your point.
There is a statistic that I find very interesting, not for predicting any decrease in morality (I agree with Scott: morality is clearly not constant across time, place, or social niche, so speaking of a change in morality is almost meaningless without much more context) but for predicting social unrest or stability: trust in government. 75%→20% is big enough to mean trouble ahead, especially together with overall trust levels (between individuals) going down. My personal impression is that the social contract is getting very thin in the Western world, and those statistics confirm this impression...
I'm afraid my default assumption about social-science research these days is that it is not about discovering truth, it is about establishing truth. Critical Theory über alles.
I think it's useful to imagine the results of a poll that asked, "Do you think your own morality is declining/increasing/staying the same?" I feel like I'm becoming a better person over time, and I suspect that most people would say the same (about themselves, not about me). Yet I would have an extremely hard time coming up with even imaginary statistics to support that perception.
If there are not perceptual biases associated with "morality over time" questions, then people would generally have to be becoming more moral individually while at the same time becoming less moral collectively. That's not impossible; one way to pull that off would be if babies of this generation emerge from the birth canal less moral than babies of previous generations. Seems doubtful.
Another way would be a change in the rate of improvement of personal morality over time: people now improve their morality less rapidly on average than in the good old days. But that would show up in the responses to the self-improvement question. Also doubtful.
What if people are evaluating the change in morality over time by taking a heuristic shortcut and comparing the morality of old people vs young people? If so, improving personal morality requires old people to be more moral than young people, and the perceptions become consistent.
Here's what I think is actually going on (well, this plus a lot of other stuff): people evaluate society's morality in general by comparing it to their own moral code, which they perceive as superior. The young attribute this difference to societal decay, since what they learn about the past is biased toward noble people doing noble things. The old regard their moral code as the standard, and young people abandoning it represents moral decay in general. The latter is Scott's "moral drift", which is exacerbated by increasing exposure to outgroups.
Your hypothetical poll is interesting, and I think you're right that few people would say their own morality is declining. To me, the distinction between individual morality and collective morality comes from people judging their own morality by the behaviours they would like to exhibit, while judging society's morality by the behaviours they observe. With that in mind, a moral decline in my society is summarized by thoughts like "If I lived 20/30 years ago I would be able to act in the way I think is moral without being punished for it, but since I live today I have to act immorally to avoid harm". There's some cognitive dissonance involved but I don't think it necessarily means that the young and the old have a different moral code, young people could have a similar feeling.
The remaining question is how you get a decline in collective morality without a decline in individual morality, but I do think we observe these sorts of situations in other contexts. For example, take the current publish-or-perish structure of academia. Most (if not all) people involved think it's a bad setup, we didn't always have this kind of system in place, and I think if you polled them, most academics would prefer a different kind of setup and they wouldn't personally say they're in favour of the current norm. Yet, despite all this the system persists.
I disagree, I feel like I'm becoming less moral over time (I'm in my early 40s).
I feel like it's easy to be moral when you're eighteen and you don't have any real responsibilities or needs. "I'm going to be morally pure knight Templar and I will never do X, Y or Z". But as you get older you need to compromise more in order to live as an ordinary person in the world, and it turns out that trying to maintain those moral standards makes it difficult to interact normally with other people.
Outside the debate over what real moral decline has occurred or what that means, I do think that their theory on how people construct their mental model of moral decline based on memory and attention is the interesting part.
One thing that struck me from the paper is that it seems you can get a rough “perceived decline per year” that is fairly linear and observable among people at any adult age from questions like “is such and such moral thing better or worse now compared to X years ago”. I don’t think there’s much of any metric that would tell you actual morality has declined in a linear fashion. Crime rates went way up and then back down in your graph, and I’d be surprised if any other objectively measurable thing we want to use as a proxy for morality shows a strictly linear change. And it doesn’t seem plausible that moral shift from when someone was born to the present has any sort of constant rate of change either; I would be skeptical of someone who said the shifts in morality from 1960-1970 were the same in magnitude as the shifts from 1990-2000, for example.
Your thermostat point is well taken, but the only thing we need to doubt is that people’s apparently linear perception of decline corresponds to reality. And there I do think something like the authors' model of memory and attention is valuable. It proves much less than they want, and maybe “people’s intuitive evaluations of the past relative to the present are based on error-prone psychological processes, which psychologists have said for years, and here’s another area where that seems true” is not exactly headline-worthy, but it does seem valuable.
Since this touches upon my area of expertise, a detail on Livy: the paper that Scott quotes, while very learned, perhaps goes too far on the issue of husbands having the right to kill wives who drank. Augustus, Livy's contemporary, actually removed the right of husbands to kill their unfaithful wives altogether. Ancient patriarchal customs like the ius osculi – allowing the male relatives of a married woman to check her breath to see if she had violated a ban on drinking alcohol, which does not mean it gave them the right to kill her – almost certainly were obsolete early in the republican period. Indeed, they were mocked mere decades after Livy and Augustus, for example by Agrippina the Younger when she asked her then-husband, emperor Claudius, to kiss her as a drinking test.
I don't recall a single example of a woman killed for drinking in Roman history, and tons of recorded examples of drunk women.
I wonder if there's something like, "virtues" in the virtue ethics sense tend to decline, because which ones are relevant shift and the older generation are used to thinking of the virtues they follow as inherent morals. And new morals occur but don't feel like virtues at first.
Or the balance shifts towards consequentialism.
Or the pendulum swings back and forth and sometimes we have new moral movements (let's be vegetarian, let's abolish slavery, let's fund international charities), and sometimes we get "we used to do this and now no-one bothers"
Proof by exhaustion is a very good way to put it. Follows the common psych survey paper trick of throwing a ton of cheap, small MTurk samples at the problem. Can't conclude anything meaningful from any of those studies. Possibly the 700 person one from Prolific is ok; not familiar with them. They make no effort to correct or de-bias anything, because you can't when you're running MTurk studies with 100-300 people each.
Explicitly asking people what they think morality was like over the course of 20-year increments before and after they were born is ludicrous. I have no clue how I would answer those questions, and I doubt anyone else really does either. It's a noisy, unnecessarily complex way to get at the general concept of 'subjectively, do you think things used to be morally better or worse than they are now'. And injecting that unnecessary noise is fundamental to their results.
Worse, they then compound that problem by making a derived indicator based on that noisy, overly-complex set of questions and age ((moral decline in 2020 - moral decline at year of birth) / age), on a sample of 347 from a low-quality source (study 2c). *Then* they run a regression on that indicator. The relevant result is that age doesn't impact perception of moral decline. Aka they don't find a significant result. Because the test is comically under-powered and because the explanatory variable they care about is also in the target variable. Proof by obfuscation is another way to put this. They throw around a ton of unnecessary jargon to hide how simplistic what they're doing actually is.
Study 4 (the big one with archival surveys) is a mess, if I'm reading it correctly. Presumably they have all the individual survey responses, so they're regressing the outcome on the various year indicators. But that doesn't really make sense. They care about individual changes, but there aren't repeat responses. What they do is conceptually identical to regressing on the overall survey average, but with a lot of noise injected. And that explains why they can't see an effect in R2 even when you can see one in mean differences: year is a discrete variable with like 8-12 values at most for all of these studies. The variability within years is nearly always going to be far greater than the variability across years.
Even if you have a clear mean difference across years, if the variance within years is sizable and varies across years significantly (which are always true for survey time-series), or if the trend in the average response isn't strong and linear (which it rarely is for surveys), then you won't see an effect in R2. You will see it in the coefficients for the individual years. Which apparently they also didn't test because they just report on one year coefficient, so they probably used year as a generic trend indicator rather than a factor (testing for 'bigger number make survey change' VS what's the effect of year1, 2, etc.).
What would make more sense (especially because it's what they do earlier in the paper) would be testing bias-corrected mean differences of average survey results for every year-pair for every series. But they'd find effects if they did that and it wouldn't tell the story they want.
This is not good work. But it's a topic people like to speculate about and Dan Gilbert is famous, so good enough for Nature I guess. The short version of all this is that they wanted negative findings, so they picked under-powered samples and used noisy, complex metrics. It's like reverse p-hacking.
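The within-year vs. across-year point in particular can be demonstrated with fabricated survey waves (nothing here is from the paper; all parameters are invented): per-wave means drift visibly while pooled individual-level R² stays tiny.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 8 survey waves, 500 respondents each, with wave means drifting
# down slightly and within-wave spread much larger than the drift.
n_per, n_waves = 500, 8
waves = [4.0 - 0.06 * t + rng.normal(0, 1.2, n_per) for t in range(n_waves)]

# Per-wave means show a clearly visible gap between first and last wave...
gap = waves[0].mean() - waves[-1].mean()

# ...but pooled individual-level R^2 against year is tiny anyway.
x = np.repeat(np.arange(n_waves), n_per)
y = np.concatenate(waves)
r2 = np.corrcoef(x, y)[0, 1] ** 2

print(round(gap, 2), round(r2, 3))
```

The gap in wave means is plainly nonzero, while R² lands near the sub-1% values the paper reports: a clear mean difference and a negligible "variance explained" are perfectly compatible.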
I suspect that they find that the year only explains 0.3% of the variance because they use additional predictors, such as birth year. So there is a decline across birth years, but within each cohort the difference is basically 0.
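That cohort explanation can be sketched with an invented dataset: if trust depends only on birth cohort, survey year looks predictive on its own (because it is correlated with cohort) but adds essentially nothing once birth year is in the model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical setup: trust is driven entirely by birth cohort, not survey year.
n = 20_000
birth = rng.integers(1930, 1990, n)            # birth year
year = birth + rng.integers(20, 60, n)         # survey year = birth + age
trust = 8.0 - 0.05 * (birth - 1930) + rng.normal(0, 1.0, n)

def r2_of(predictors, outcome):
    """Plain OLS R^2 with an intercept."""
    X = np.column_stack([np.ones(n)] + predictors)
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    resid = outcome - X @ beta
    return 1 - resid.var() / outcome.var()

r2_year = r2_of([year], trust)                 # substantial: year proxies cohort
extra = r2_of([year, birth], trust) - r2_of([birth], trust)  # ~0 given cohort
print(round(r2_year, 2), round(extra, 4))
```

Under these made-up numbers, year alone explains a sizable share of variance, but its incremental contribution over birth year is essentially zero, consistent with the suspicion that controlling for birth cohort is what drives the tiny year effect.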
I suspect a trend towards "morality is harm reduction" not because of any Haidt principles but simply because we live in an age where we prefer observable performance measures. I mean folks (including me!) will literally argue online about whether a studio produced movie is "objectively bad," when it's clearly a matter of what you value in media. I wonder if morality is the same - it's values-based, but our age is one where unsupported value judgments are passe, so we have to be able to say "see, this value is important because it reduced this specific thing we all agree is a harm."
While I'm mostly this way too, there are tradeoffs in insisting on this standard like there are tradeoffs in everything. For instance, I'm likely to follow any safety instructions/regulations with my daughter, because in any individual case there's a clear harm for ignoring them vs. a much lower cost for following them. But it's entirely possible that in the aggregate this will teach her that safety is an overriding concern and risk-taking is to be avoided. This could create something like moral decline if spread across a generation - it would be extremely difficult to measure in an unbiased way, but still lead to generally worse QoL.
Note also that I'm still defining "moral decline" as "leading to a generally worse QoL." If I made this kind of argument to religious acquaintances they would say "morality is defined by doing what God wants." They might (depending on how infected with modern sensibility they are) argue that doing what God wants leads necessarily to higher QoL but even if it didn't they wouldn't change their mind about what is moral and what is not.
I expect a Pew poll conducted in 850AD would also have seen agreement with the moral decline hypothesis, just as it would with a 'kids today' question. The proper determinant that changes is 'morals' - which certainly change over time. Having said that, like the respondents in 850AD and those in 2023AD, I do think people pay less attention to moral questions, or are more willing to think they shall get away with transgressions. I suppose I would need a news aggregator from 850AD to see how the mass murders and child abuse cases might compare with today's. I suspect such a non-existent device would prove me to be talking out of my arse! Pinker would suggest so.
Scott, I'm going to unscrupulously pirate this post, edit it, and assign it to my junior high classes as supplementary reading next year. I have to teach the "decline bias" in critical thinking. It's always felt weird but I didn't have time to look up anything. You've now articulated everything I had vague misgivings about.
On the other hand, there's Qoheleth. "Say not thou, What is the cause that the former days were better than these? for thou dost not enquire wisely concerning this."
In section III, I can see a potential case for net zero moral change over time. Say humanity is in a blind-men-and-the-elephant scenario. Each generation is partly right about something and really wrong about something else. Each younger generation figures out what the older ones were Damningly Wrong About. And each older generation sees that the younger ones are Damningly Lax About Something Else. But since the older generation always writes the books and editorials, everyone gets the impression things are declining, and no one notices it's net zero.
Anyway, that's the best I can do for MG. Back to editing your post for junior highers...
This is the Bayesian version of a significance test.
ROPE = region of practical equivalence, in other words a small region around some parameter value within which values are considered "close." HDI = highest density interval, in other words the measured value of that parameter, but because it's a distribution, they look at a range of values. In particular, they look at the X% of the distribution which gets the most probability into the smallest range; according to https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html, X=89 is standard. What it actually means is "there is a high probability that the measured parameter value is very similar to the hypothesized one", which in this case I'm guessing is 0.
Yes, yes: X% is the percentage of the posterior used for the interval in the decision rule. But at least as important is the definition of "closeness", i.e., the width of the region around 0 that counts as practically equivalent to 0. How wide is it? Very context-dependent.
I think ROPE is suffering the same fate as p-values in NHST, in that nobody properly understands it. The key definition of what is considered "practical equivalence" is buried as a side note.
The linked article claims that +/- 0.1 standard deviations is standard for defining ROPE, but I don't know where this comes from or if it's actually common.
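Under those assumptions (an 89% HDI and a ROPE of ±0.1), the decision rule can be sketched as follows. This is a minimal hand-rolled version - the posterior draws are fabricated for illustration, and the HDI helper is my own, not the bayestestR implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Fabricated posterior draws for some effect parameter.
posterior = rng.normal(0.02, 0.03, size=20_000)

def hdi(samples, mass=0.89):
    """Narrowest interval containing `mass` of the samples."""
    s = np.sort(samples)
    n_in = int(np.ceil(mass * len(s)))
    widths = s[n_in - 1:] - s[: len(s) - n_in + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + n_in - 1]

lo, hi = hdi(posterior)
rope = (-0.1, 0.1)                  # "practical equivalence" band around 0
inside = rope[0] <= lo and hi <= rope[1]
print(inside)                       # HDI entirely inside ROPE -> accept "~0"
```

The substantive choice is exactly the one flagged above: how wide the ROPE band is. Shrink it from ±0.1 to, say, ±0.01 and the same posterior no longer counts as "practically zero."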
"Is this because their methods are too weak to notice not just the improvement in gay rights over the early 2000s, but the improvement in African-Americans’ condition since the 1950s?"
I know this is snarky, but it is almost crime-think to notice that the treatment of African-Americans has improved. All the disparities that existed in the 1950s still exist, with the percentage disparity not that much different. Since poor treatment of African-Americans by American society is the only socially acceptable explanation, the only logical conclusion is that treatment must still be very bad.
"The r^2 statistic of the graph above is listed as -0.006."
r^2 shouldn't be negative. It looks like the coefficient listed is -0.006 and the r^2 is .008. But this is still strange; the r^2 should be much higher. Even with random data at these sample sizes r^2 should be higher than the values listed for the various surveys in Table S3. Does anyone have an idea what's going on? The numbers are so strange that I almost suspect this may be a data entry error?
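The sign behavior is easy to confirm on toy data (nothing here comes from the paper): a least-squares slope can be negative while r², being a squared correlation, stays in [0, 1].

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data with a clearly negative relationship plus noise.
x = rng.normal(size=2000)
y = -0.5 * x + rng.normal(size=2000)

slope = np.polyfit(x, y, 1)[0]        # least-squares slope: negative here
r2 = np.corrcoef(x, y)[0, 1] ** 2     # squared correlation: non-negative

print(slope < 0, 0.0 <= r2 <= 1.0)
```

So a reported "-0.006" can only be a coefficient, never an r²; the two numbers answer different questions.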
I think morality rapidly declined when corona hit and several tyrannical impulses kicked in; then some 5% of the population grew a backbone and it's in decline again. I think we are better than the Romans, but not so sure about the 90s.
So I did what I do with all papers now, I go check to see if the data and code is publicly available, and it is, so bravo on that, but...um, this is just code vomit, for lack of a better term. Like, look at this (https://osf.io/tv5jr), it's ~100 random files created over three years, everything from R code to csv to user/...../source/prop, whatever that is, to powerpoints, and there's an R file which I think just makes one image, "data and code_upload/figure2_code.R", which is just...
Do, do people actually work like this? I mean, I've scanned through this paper, it's not that complicated, I've seen entire data pipelines without this many files. Again, people have different workflows but...but this is really outside anything I've ever seen.
And I'm torn because, on the one hand, I think publicly posting your data and code is one of the most important things academics can do and I want to applaud it here but, also, the entire reason to post your data and code is so other people can double check it and... you can't do that here. I'm not reading all these files and trying to figure out this nightmare workflow of what should be, honestly, like a dozen csvs and a single rmd file.
And I can't tell if this is, like, just a horrible workflow or malicious compliance or what but, like, this is as close to posting your code without posting your code that I can imagine. Or am I missing something?
Maybe one of these days a prestigious journal like Nature will start imposing peer review for the actual methods, the statistical analysis code, not just the written paragraphs to describe it in the main text? One can hope.
Maybe we should have a new kind of journal: instead of (or in addition to) grants that give researchers money to do research, the funds would be granted on the condition that a part of them, earmarked in advance, is used solely to pay for statistical review (including programming-code review) by an independent person (perhaps assigned by a reputable journal-like publication organization).
Here’s my quick take on this. For context, I’m an academic in the physical sciences. Pretty much all of my papers are done in collaboration with *students*. They are doing this project/writing this paper as part of their graduate training/education. I personally am very picky about how they organize data and code, and they are still terrible at it. I’ve concluded that they have to do it poorly to learn how to do it well. It is very annoying to me as I have to deal with stuff like you describe every day. It thus takes a lot of work to organize student’s data. It takes even more work to make it useful for external consumption. And researchers have little incentive to make it so.
Yeah, despite my own annoyance with this as a student for my own work, I'm still afraid to look at some of my earliest "work".
You also have to consider the time pressure and incentives: several times I was stubborn about this, but then got bad grades as a result of getting only a fraction of the work done. (It of course helps that you get faster with practice.)
The moral drift seems to be a bit quicker these days.
What personal scriptural deviations disqualify someone from a pastoral role in the liberal branches of mainline Protestantism these days? Obviously fornication is non-disqualifying, nor is homosexuality, nor is atheism (at least in the United Church of Canada). The Wesleyan Quadrilateral has added a few more sides since my confirmation days, what with respectability and politics becoming the larger sides of the polygon.
Here's a thought: is it possible that objective measures of "morality" (in particular, crime rates) can go down even though people are no more or less moral than before, because we have managed to outsource lots of moral duties and quandaries to the state, or solve them technologically? In previous centuries, it was a moral duty for men to carry arms to protect their families and their honor. This has become moot, even anachronistic, in places where you can call the police and expect it to protect you. In previous centuries, it was considered the duty of fathers and brothers to watch over the chastity of their female relatives, because an unwanted pregnancy would have been disastrous. With contraceptives, abortion and paternity tests, as well as economic independence for women, this has become a lot less urgent. Generally, we have managed to arrange our lives such that we need to make fewer hard decisions, placing less strain on our moral muscles.
Is perception not reality, despite all the noise emanating from the media, which seem to exploit fear with frequent telling of this or that bad event? The tolerance of theft that in the past was clearly prosecuted would suggest to a great many that morality is lower today than in the past; as would, in some cities like NY, the fear of traveling on the subways.
You are entitled to your opinion, and most people I know who live there will avoid the subway system at all costs. Perception is reality. Your data point is hardly determinative...
I know other people have said this before, but Livy is *right.* The Roman Republic had previously been one of the most functional-at-not-having-civil-wars states in the Mediterranean, thanks in part to Traditional Roman Morality - not the murdering women part, the ridiculous and implausible devotion to the laws part that was regularly celebrated as the chief element of the Roman state's success. That collapsed in the generation of Livy's parents with the Marius and Sulla wars and reached a nadir in his own generation with the massive, bloody civil war that preceded the transformation of Republic to Empire and the de facto return of the Kings of Rome.
I think the distinction between civil and foreign wars is legitimate! As TGGP says, I think he regarded "Rome conquers its enemies" as a sign of greatness; "Romans kill each other" as a sign of moral decay. A state that conquers other states is evil-but-functional (I don't think Livy would include the evil), a state that fights itself is dysfunctional.
And it's not as if the Rome of Augustus thought that conquering its enemies was bad, or wrong. It's just that when Augustus tried, it mostly didn't work.
I'm less convinced the problem was that Rome couldn't function without being at war, as opposed to Rome's magic civil-war-not-having powers breaking down when the ratio of citizens to noncitizens with fighting skills got too low, which would inevitably happen if it fought lots of wars without major political reforms.
(And the wars, I theorize, were mostly a matter of war heroes getting elected and reelected and celebrated and honored and getting to loot their victims and seize all their land - the state didn't need war, it just really, really wanted it.)
Oh boy. Well, you did cover most of my own issues with it.
(Still, what's up with this talk about "other countries [than the USA]", which makes many of these assumptions even worse? What about India and China, which alone (today at least?) make up half of mankind? What about the Muslim world and their very different morals? What about the fall of the USSR?!)
Speaking of footnote 5 and the final bit about wealth:
Well, Livy seems to have been roughly correct: in terms of state capacity, demographics and economics, the Roman empire peaked around the first century!
And I'll have to disagree about the increasing "wealth". I'm not sure what your definition of it is, but it sounds like it doesn't include the consumed natural resources or the increasing environmental damage; and because a lot of people indeed don't include those, we are still careening faster and faster in the direction of ruin, despite having been warned about it more than half a century ago!
> In Orthodox Judaism there is a saying: “The previous generation is to the next one as angels are to men; the next generation is to the previous one as donkeys are to men.”
Or consider the renaissance writers, doctors, and scholars who considered it their responsibility to preserve and pass on the superior and untouchable achievements of the Greeks and Romans. In several scholarly traditions--I believe including the aforementioned Judaism, but also Chinese philosophy, as well as ideologies like Marxism and Objectivism that flow from a single person's writings--you see this same phenomenon. And it's extremely common in fiction, covering not just morality, but domains like architecture, science, technology, proficiency with magic if it exists, wealth/economic development, medicine, etc.
Yes, sometimes this belief is warranted; see https://slatestarcodex.com/2017/10/15/were-there-dark-ages/. But overall it seems like it points to a bias that isn't just about shifting moral standards (also, it seems unlikely to me that moral standards shifted so rapidly in ancient Rome). I don't know if "reverence for the old" is just its own whole black-box bias that evolution gave us, or if it arises from other biases and/or common tropes in human societies.
It used to be that tradition was the most potent cultural force, see e.g. https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/, so this bias was plenty justified, like they usually are. These days technology has plausibly dethroned tradition in that role, so old heuristics aren't quite as accurate.
"I can’t tell you whether morality is increasing or decreasing. But a first stab would be to note that wealth is increasing. We might expect those virtues which wealth makes less necessary, like industry and chastity, to decline - and those virtues which wealth makes more convenient, like compassion and pacifism, to increase."
The Belle Époque/Edwardian Era was a prosperous and relatively stable (in terms of internal politics of the major powers) time in Europe and in Britain. It was fulminant with antisemitism, colonial depredations, and wretched exploitation of industrial labourers as opposed to compassion, and seething popular nationalism as opposed to pacifism, ending of course in the war.
I don’t even understand the purpose of this paper. I get that it essentially functions as a piece of activism above all else, but I don’t even get who this is for. If MG’s goal was to draw more attention to progressive policy issues, this method of doing so seems like a complete waste of time. Usually when conservatives talk about moral decline, they think of things like acceptance of drugs, sex/pornography, and violence. They then point to overdose/addiction rates, depression/mental illness, violent crime rates etc. as evidence that these things are bad. They will often tie this into church but there’s a lot of secular conservatives these days as well.
So if you’re a progressive who never bought into the conservative idea of moral decline, this paper just exists to reinforce that. If you’re an intelligent conservative and read this paper to gain their perspective, you’re just going to see “oh, they didn’t even include violent crime or drug overdoses” and rightfully dismiss it. If you’re an uneducated MAGA die hard who only reads headlines, the odds of you changing your mind because “studies show morality isn’t declining” are literally zero. So this whole project, which seems like it took these gentlemen a considerable amount of time and effort, was really just a colossal waste of time. I don’t even understand why stuff like this gets published. Don’t these people want to be impactful on the world, and isn’t that why they became researchers?
I dunno, people holding these sorts of opinions seem to have plenty of impact on the world, so they must be doing something right, by their lights that is.
Yeah I get what you’re saying. I guess my point is that their impact is based upon something independent of putting together elaborate cherry picked “research” like this. I think the appeal of what we call “wokeness” is purely emotional to most people who adhere to it. I know the “wokeness=religion” point has been beaten to death, but people are catholic for the same reasons they’re woke: it gives their life meaning and makes them feel like a good person. But you don’t really see the Vatican commissioning elaborate cherry picked studies on the effectiveness of prayer (although maybe they should). Or perhaps they do and I don’t know about it, but either way I just feel like that’s very unlikely to convert anyone at all.
You’re right.... it’s just really disappointing to be honest. I don’t understand why these guys wouldn’t want to spend their time and grant money actually trying to figure something out. Then it goes through peer review and gets published by top journals. I think I speak for a lot of people when I say I went from fully believing in academia, to viewing it as an institution that was “obviously biased but still credible”, to now pretty much treating new social science findings like Bigfoot sightings: “there’s almost certainly a more logical explanation here so I'll dismiss until proven otherwise”. Even the “hard sciences” have these issues, though they're not quite as susceptible to the ideological bent.
If you set off to actually figure something out, Problematic Implications might turn up, and then your goose is cooked. I'm not sure how conscious those academics are about risks of this sort, but, like they are fond of saying, it doesn't matter much compared to systemic issues.
The study sounds like bullshit. I have no clue whether a real moral decline is in progress, but my intuition is no over the time scales I care about, and I tend to care about pretty long time scales. I don't read the current news at all, but spend quite a bit of time learning about older history, and study pre-history and even cosmology as hobbies. I tend to reflect upon things like the sheer destructiveness and pointlessness of WWI, children working in coal mines, the fact that the French used to publicly burn house cats alive as a form of entertainment. The past sure seems to contain a lot of examples of utterly careless disregard for human life and literally no regard whatsoever for any other form of life. Many cases of flat-out revelry in the suffering of others.
In comparison, it's hard for me to really think it matters that people are more likely in some countries in 2020 to have sex before marriage compared to 1950. Maybe that has more bad consequences than good and we'll find out down the line it was a bad idea, but in the face of the larger general trend whereby humans far more broadly care at all that other humans in the world are suffering and try to do something about it, that we widely care about other humans aside from our own families at all, taste-based morals don't seem like they make much of a dent. They get magnified into wedge issues and end up hotly debated because they're the only point of divergence we've got and the middle children of history feel like they need something to argue about, but they're not really that important.
As for decline in trust and general feeling of safety, I don't doubt that's quite real, but also a natural consequence of humans encountering far more unknown strangers than ever before, in part because population has increased and thus population density has increased, in part because humans are more mobile than ever and living elsewhere than where they grew up, and in part because of omnipresent telecommunications making it seem like we're surrounded and threatened by far more people than are physically near us. Other humans aren't any less trustworthy on average, you're just more likely to encounter someone who is untrustworthy when you encounter 15,000 unique people a year compared to if it was only 50. And those people might even actually be more likely to try to exploit or deceive you, not because they're worse people, but because you're both strangers and they're less likely to suffer any consequences than if you were part of a small community where everyone knew each other.
Social media tries to simulate this kind of thing with call outs and cancel culture, but it's overwhelmingly for stupid reasons, not anything that makes a material difference to any modal person's feelings of trust and safety.
HDI = Highest Density Interval = a range of values containing some large fraction of the posterior. Used to characterize the range of values that look plausible after Bayesian analysis.
ROPE = Region Of Practical Equivalence = a distance close enough to some value to be "basically the same". Used to let you do null hypothesis testing in a Bayesian framework.
"HDI within the ROPE" = almost all of the density of the posterior is so close to 0 (or whatever null) that it's basically just 0.
So what posterior are they calculating? What *parameter* has "HDI within the ROPE"? Here's their methods on study 4:
"We fit a linear model for each survey. The year of each survey was always entered as a predictor and the outcome was always the average perception of current morality. We used R^2 values as a measure of effect size. We fit Bayesian models using the Rstanarm package in R[33] and extracted the percentage of the 89% HDI that was contained in the ROPE, which was by default defined as ±0.1 standard deviations. We used the package’s default Markov Chain Monte Carlo and prior settings (M = 0, scale of 2.5)."
That's... not actually very helpful? They don't say what model they're fitting to for the Bayesian analysis, and I don't understand what the ROPE is +/- 0.1 standard deviations OF. I'd look at the code to figure this out, but it's behind a "request data" wall, and I'm waiting on the request with no idea if it will be granted.
Still, I'm a little alarmed at Scott's treatment of statistics in section II. The reason we adopted statistics in science was to prevent people from looking at graphs and saying "this data obviously shows a trend, you can see by the lines" when the data doesn't, in fact, show a trend. Turning around and saying "if the statistics say there's no trend there, then I don't trust statistics" is a red-flag move -- it could be the right move, given a strong enough data trend and a mysterious enough statistical test, but if you find yourself making it then you should halt and consider whether you should catch fire.
...is this data strong enough that we can ignore the statistical test? I wouldn't say so. We see a bunch of mean values at each time with no hint about the range or distribution within each summed-up point. This could easily be a bunch of noise. If I collected a dataset like this (I'm in biology), I wouldn't QUITE throw it out as noise without doing any statistics on it at all... but it'd be close.
I'm also a bit concerned that we're cherry-picking ONE graph to stare at out of THREE different charts at the linked source and TWELVE questions assessing general morality/trustworthiness used in the original paper.
"Still, I'm a little alarmed at Scott's treatment of statistics in section II. The reason we adopted statistics in science was to prevent people from looking at graphs and saying "this data obviously shows a trend, you can see by the lines" when the data doesn't, in fact, show a trend."
I'm with you here. It may be a bad study and inappropriate use of data, but the way we handle that is with more rigor not less.
Another theory (unrelated to this data) is just simple moral pluralism. Society is FOR SURE getting more "cosmopolitan".
Things could be getting better and better in aggregate, but also since people's individual sense of morals are getting more and more diverse, there is less and less sense other people are moral (because they don't match your standards and care about other things).
Also on the Livy thing, or really any "hey the Romans thought they were in decline" comment. The Romans often were in decline. There were a lot of ups and downs, and the peak of their geopolitical power didn't necessarily match up very well with their peak of civic mindedness and personal virtue.
This exemplifies a concern I have about the increasing tendency of traditionally scientific journals like Nature to publish hot button social science papers. It's easy to load value judgements and controversial philosophical positions into such papers, even without dubious intent, and present them as empirical, scientific findings. While it's a nice dream, to do truly scientific social science, I suspect this turn will likely do the opposite and let the problems of social science corrupt otherwise scientific journals rather than making academic social science more scientific.
On an institutional level at least, the world has become much more moral during the last decades. The ball started rolling with the 1948 agreement on the Universal Declaration of Human Rights, first in the so-called West, but increasingly everywhere. For example, the US cannot any longer sell arms to Nigeria to quell Boko Haram, since the Nigerian government cannot offer the type of guarantees that no violations of Human Rights will take place, that US weapons manufacturers need for their paperwork. (While on a mass level, the woke phenomenon indicates that we live through a historical period ideologically dominated by a normative signalling arms race, i.e. signalling-higher-morality-than-you.)
Yeah well...my point was only that human rights-based moral thinking is increasingly constraining policies everywhere. I believe this is an empirical fact that can be documented. Whether this is a good or bad thing (your question) is an entirely different question. My opinion, for what it is worth, is that the effects of this increasingly "morally constrained institutional decision-making" vary. Plus, they are likely to vary depending on your deeper moral outlook: deontology (do what you perceive that you are morally duty-bound to do, with no regard for eventual negative side-effects of your decisions) versus utilitarianism (aka the ethics of consequences). In the latter case, giving weapons to Nigeria to do extrajudicial killings of Boko Haram (plus its local competitor, the Islamic-State Nigerian offshoot) may have better long-term outcomes than making it difficult for the government (plus vigilante groups affiliated with the Nigerian government) to do such killings - since this may, in a worst-case scenario, result in the disintegration of the Nigerian state and a descent into chronic civil war. (Whether this is a real risk is an empirical question, where you have to assign probabilities to different outcomes. Not easy, but not impossible either.) My empirical point in this context, though, was only that in practice human rights deontologically constrain policy-making to an increasing degree.
I'm saying that the concept of human rights in the real world has been nothing more than an opportunistic device for the US/NATO to pursue their global hegemony.
You misunderstand my comment. I was making an empirical point, not putting forward any normative statement or normative hypothesis.
Concerning your normative hypothesis: like all conspiracy-type hypotheses it is hard to falsify, but it is weakened by the fact that the US has been more reluctant than most countries to ratify Human Rights Conventions or Covenants. For example, the US is one of only six or seven UN countries that have not yet ratified the Convention on the Rights of People with Disabilities. And the US is the only UN country left that has not ratified the Convention on the Rights of the Child.
One can always launch supporting hypotheses to explain why a country that allegedly "uses Human Rights as an opportunistic device" abstains from ratifying HR Conventions and Covenants itself. But Karl Popper rightly advised caution in putting forward supporting hypotheses to save an initial hypothesis that faces contradictory empirical evidence, as that threatens to make the initial hypothesis immune to falsification.
Be that as it may, I do not wish to go into a further debate concerning possible conspiracy motives related to HR, since my point was empirical, not normative.
I'm tempted to link the quadrupling of male dropouts to skyrocketing marijuana consumption but, alas, trying it in college or at a party a couple times hardly qualifies as "use".
Although I'd appreciate the company. I suspect our adult population of tokers is closer to 10%.
I would not be surprised if 1/3 of US adults used marijuana in some form at least once a month. (I also wouldn't be surprised if the proportion was smaller, but just 10% would be shocking.)
I don't think it's true that science advances funeral by funeral, but morality very well might. I don't think it's a coincidence that there wasn't a civil rights movement for American blacks as successful as the one led by Martin Luther King Jr. until after the Confederate veterans of the Civil War were all dead.
This is one of my few worries about radical life extension: it's a lot easier to raise a child not to be a bigot than it is for an adult that is a bigot to stop being one, and you can say the same thing about other changes in people's individual morality. What other attitudes would someone born in 1800 have to change before they wouldn't be considered a moral monster by today's standards?
Moral decline seems like a pretentious phrase for “the kids these days, I tell ya.” It seems mainly that the moral standards of the prior generation simply become irrelevant and are replaced with different standards.
"There are people in every time and every land who want to stop history in its tracks. They fear the future, mistrust the present, and invoke the security of a comfortable past which, in fact, never existed."
It seems to me there's something wrongheaded about trying to figure out whether people believe morality has declined. The effort presumes that this question, as formulated, is one that people think seriously about, and have definite views on. I'm not at all sure most people do. There are lots of big general questions like "is morality declining?": what makes for a happy life? is it important for kids to be exposed to the arts? what kind of landscape is most beautiful -- seascape, meadow, mountain view, other? are some animal species happier than others? which sport takes the most skill? is gambling completely pointless? will our species reach the stars someday?
I think most people do not have a view about most of these questions, though of course you can get people rambling on one of these subjects, and if you insist they fill in an answer or a rating of each on a survey people will do it. I for instance, do not have an opinion about whether morality is declining, and if pushed to give a detailed and honest answer I'd start by saying that I don't know what, exactly, I'd consider to be components of morality. A few things people do seem clearly bad to me, a few seem clearly good, and most of the rest seem interesting and complicated to me and when I think them over I am not very likely to be asking myself whether they are ethical or not -- I'm asking myself other questions. It's not uncommon to find someone who has a definite view about *one* of these questions, because it's of personal importance to them. I expect that most astrophysicists have a view about whether our species will reach the stars. But I really doubt that I'm unusual in being someone who does not walk around with an Opinion of Trends in Morality meter in me someplace, or a Skill Level Required by Different Sports table.
> They mention in the text that these kinds of questions did better than others; 50% report improved treatment of gay people. But what are the other 50% thinking??!
I'm not sure how closely you quoted the actual poll questions, but the wording in your post asks whether "things have gotten better", not whether "gays are being treated better", so I could imagine someone who thinks we have TOO MUCH wokeness today might answer "worse".
Most serious expositors of moral decline (or, more broadly, civilization decline) posit trends on the order of centuries, not decades. Moreover, they often hold that, over enough centuries, you see a sinusoidal pattern, with rises alternating with declines.
Thus, there is zero contradiction in holding that we have long been in a secular decline, that Livy, writing in the 1st century BC, was too, and that there have been rises in between.
I was with you until section III, where you started bringing "conservative values" into the equation. Yes, it's true that if you focus on the subset of moral values that were considered important by people in the 1940s but aren't considered important by people in 2020s, then it will seem as though morality has declined over the past 80 years. By the same token, you could just as easily argue that by the standards of 2020s progressives, morality (i.e. opposition to racism, sexism, and homophobia) has actually increased over time! But these conclusions don't say anything interesting, and they don't really answer the question except in a frustratingly narrow sense. All it really amounts to is an acknowledgement that some of society's dominant values change over time, which is obvious to anyone who's even mildly familiar with history.
To actually make the question worth asking or answering at all, you need to focus on the moral values that *don't* change, the ones that transcend the political divide. Look at the things that people in the 1940s *and* the 2020s would both agree are good, and see if people are doing those things less. Look at the things that both conservatives and liberals would agree are wrong, and see if people are doing those things more. That's precisely what the studies you quoted were trying to do (both 40s conservatives and modern progressives would agree that violent crime is bad), so criticizing them for ignoring partisan and period-based standards of morality is completely missing the point. Not including those standards isn't anti-conservative bias, it's just neutrality. (If they'd really had a progressive bias, they could've just compared the number of Black, female, and openly LGBT Senators in 1940 vs. 2020, and then claimed the dramatic increase as proof that people were actually much more moral than they used to be.)
The study includes this poll question as an indicator of morality:
"Compared to the past, have things gotten better, worse or stayed the same [regarding] treating gay people with respect and courtesy? (2002 vs. 2013)"
How is this a moral value that doesn't change and transcends the political divide?
Also, moral values that change are still moral values! You can't declare someone born in the 1940s wrong for saying morality is declining if they're correctly perceiving declines in values he cares about, just because people in the 2020s don't care about those values.
Okay, here's an example: Let's say a devout Christian is trying to convince people that society keeps getting worse as a result of the ongoing decline in religious beliefs. If someone asked him to provide evidence for that claim, and his reply was "just look at how church attendance rates have dramatically plummeted," that would be a poor argument, because no one who isn't a devout Christian themselves is going to care about church attendance rates. It's only convincing to people who already agree with him! If he wants to make an argument that's convincing to anyone else, he needs to appeal to shared values that aren't exclusive to Christianity, and prove that *those* values are also declining as a result of secularism.
My point isn't that conservative moral values aren't "real" values (whatever that means). My point is that, if you want to make a case that doesn't boil down to the tautology of "the decline in conservative values is bad because I personally like conservative values and think it's bad to have less of them," then you need to ground it in some value system that both conservatives and non-conservatives can agree upon. Otherwise, it's not an argument, just a lamentation.
The opposite study would not be taken very seriously, I imagine. "We find that polls consistently show that people think that the poorest among us are being more and more taken advantage of. We show however through objective metrics that the average inflation adjusted income of the poorest among us from 1949 to today is in fact ... . This is important because a prolonged false belief in economic decline can redirect scarce resources away from the pursuit of spiritual endeavours which - given our position in Maslow's hierarchy - are those of greatest importance in the present era".
TBH (and I know this is a total tangent, but whatever), I think Maslow's hierarchy errs in putting physical needs as more fundamental than spiritual ones. Maybe at the literal subsistence level ("Am I going to starve to death?") they are, although once you get past that point, people who feel their life is meaningful seem generally better able to put up with poverty and physical deprivation than rich people are able to put up with a sense of meaninglessness and ennui.
The point about the specific morality of different periods is valid, but also points directly to how "morality" is being changed. This is indirectly alluded to in noting that various standards and behavior concerning racial bias, religion etc have lapsed but it does not automatically follow that modern "moral" practices are superior. They are just different.
It does also appear - very obviously to such as myself or those who similarly do not share PMC "morality" - that said modern standards are defined as being inferior to the past.
A simple generic example would be marijuana use. While I personally don't care about marijuana use one way or the other - the fact is that the laws governing its use have been under assault for decades and the 49% referenced are *all* breaking federal law still.
Yes, people drank alcohol too when it was illegal by Constitutional amendment 100 years ago - but they didn't pretend it was moral.
Is there a reality of social scientists talking themselves stupid? This is a good example in a long litany of clearly and obviously silly work being published. I truly see it as some of the most base and clearly biased nonsense, akin to a toddler whose face is covered in cookie crumbs insisting that it isn’t and that they don’t know what happened to the cookies.
The nuclear article opens with a similar set of conceits about the apparently super powerful anti nuclear activists who stopped a giant industry in its tracks. It just sounds so dumb when thinking about all the other ways in which everyday activists have failed to stop powerful industries, even after clear and large scale harms have been inflicted on them and their children. When we engage in intentionally obtuse and extreme thinking in isolation about observing anti nuclear activists, dropping public approval polls, and the failure of the nuclear industry to expand…then it is obvious to them how a story shapes up of ultra powerful activists who the government listened to and enacted laws to do what they wanted before anything bad happened to anyone. As if!
Except you know…this almost never happens in any other topic. Could it be…nuclear actually is unsafe?! Hard, expensive, impossible to insure commercially, and involving extreme transportation risks if larger amounts of material were being moved around? Hmmm, this thinking in isolation is truly absurd. I call this talking themselves stupid. This is the same bone headed nonsense sham logic which calls for and demands published studies on topics like animal intelligence or whether babies feel pain, when any mother or anyone who works with animals can tell you the answers to these obvious questions. Nope, get lost thousands of years of experience, the real arrogance of the self appointed experts is now on the scene to ‘study’ things.
Yes indeed if you check in with common people, they can be trusted and are correct about their own lives and beliefs. This gating off of knowledge behind ‘expertise’ is absurd. The intellectual crime of credentialism and incredulous attitudes towards anything outside their own contrived orthodoxies is a plague on progress and a waste of resources.
I've tried to edit this twice, and substack has immorally made the comment disappear.
Security has costs. Locking and unlocking a door takes time repeatedly. Losing a key or other similar failures takes more time, and it's unpredictable.
In cold climates, people seem less likely to lock their doors, presumably because they don't want to leave their neighbors outside to freeze.
When I lived in Newark, Delaware (a medium-sized college town), there was a while when, if I mentioned it, people would get angry at me. I'm not sure why, but I think it was because they didn't want to hear me complaining if I got burglarized.
Good idea. I just found that trying to edit a comment makes it disappear. Note that this issue is about editing comments, not about writing them the first time. Do you save comments when you edit them?
Thinking about the connotations of "moral decline", shoplifting has become an organized crime project rather than just individual decisions. Shoplifting has become a much more serious problem, but I don't think people think of organized crime taking up shoplifting as moral decline. Or do they?
I think of shoplifting specifically as more of a legal than a moral issue. A lot of the places where you see these organized shoplifting rings have de facto legalized shoplifting, and have de facto (or even de jure) illegalized the defense of property with force. And/or don't have the law enforcement resources to address the problem.
Which is why, with perhaps the exception of a few violent offenses, I'm skeptical of crime-as-proxy-for-morality. A society where police vigorously pursue shoplifters and every other shopkeeper has a shotgun under the counter is going to have a lot less property crime than a society where those things aren't true, even if the two populations have identical moral character.
I don't see how two societies could have the same moral character but different rates of shoplifting. Someone who chooses to steal is exhibiting worse morality than someone who doesn't choose to steal, regardless of whether their choice is influenced by the goodness of their heart or the shotgun behind the counter or the fear of post-mortem hell.
As with everything here, this quickly devolves into a semantic argument about what morality is. But no, I don't draw much moral distinction between a person who steals and a person who wants to steal but only doesn't because they fear punishment. The consequences are different, but the person who is deterred from stealing is no better on a personal level.
Fewer people get married. They get married later. And it ends in divorce far more often.
In 1950, what % of children under the age of 18 were living with both married biological parents? What is that % in 2020?
This is probably the #1 thing people notice. There is no fudging it.
Even those social classes that partially reversed (but didn’t completely repair) the divorce rate did so mainly by delaying marriage and having well below replacement fertility. People in 1950 were having 2.5 kids young and still keeping it together.
If the lack of chastity weren’t correlated with (causing?) the lack of stable marriage, people wouldn’t care, but many see them as linked.
About 1/3rd of divorces are a result of abuse. These were pretty easy to end in the "fault" era as well.
About 2/3rds of divorces aren't the result of abuse. These were enabled by the "no fault" era.
Child outcomes match this pattern. If the divorce ended abuse, it improves child outcomes. If it didn't, it retards child outcomes (also, divorce tends to end in abuse from boyfriends).
No, it happens in our lifetimes. Take support for gay marriage in the US. It flipped in a span of 10-15 years, and it’s people that used to be against coming around, not just the old guard dying of old age.
> They mention in the text that these kinds of questions did better than others; 50% report improved treatment of gay people. But what are the other 50% thinking??! The answer has to be something like “2002 to 2013 is too short a time to measure even extremely large effects that were centered around exactly this period”.
Well, presumably some of them are thinking "the amount of 'respect and courtesy' that must be extended to gays now is well in excess of what would be appropriate, which is a change for the negative compared to the past". Increased respect and courtesy aren't automatically good things. The question was whether things have gotten better or worse, not more or less courteous.
The last paragraph in their discussion is interesting. They discuss how the illusion of moral decline could lead to what, for lack of a better term, seems to be in their eyes actual moral decline “If low morality is a cause for concern, then declining morality may be a veritable call to arms, and leaders who promise to halt that illusory slide—to “make America great again”, as it were—may have outsized appeal.”
Then they end by saying “Achieving a better understanding of this phenomenon would seem a timely task.”
If their thesis is correct, then there’s nothing “timely” about this task - the illusion of moral decline has been affecting human society for more than 2000 years, including in the periods where we founded democracies, abolished slavery, and fought Hitler. If this illusion had been getting people to focus on imaginary problems and choose bad leaders, then it has been doing so throughout history, well before the 2016 US election.
“This phenomenon” - the one they went looking for and - surprise! - found? The conclusion doesn’t fit the findings, you say? Consider that this whole exercise was not a disinterested search for “understanding,” but confirmation bias masquerading as new knowledge. It adds exactly nothing to our understanding of human history but presumably the researchers get pats on the back and more funding. I’m sure there are more funds available when you make sure to tell your funders what they want to hear.
It's easy to notice what has gotten worse -- e.g., when it comes to spectacular crimes, there are more school shootings and other mass shootings than when I was a kid in the 1960s-1970s. On the other hand, there appear to now be fewer political assassinations, bombings, skyjackings, kidnappings for ransom, and bank robberies. But it's hard to remember what isn't around much anymore.
Growing up in Los Angeles, for example, RFK was assassinated at L.A.'s most famous old hotel the night he won the Democratic California primary when I was nine, the Manson Murders of Sharon Tate and friends were a few miles away when I was ten, and the LAPD burned down the Symbionese Liberation Army's house (but kidnapped heiress Patty Hearst wasn't there) when I was 15.
Was Los Angeles crazier around 1970 than it is today? I'd guess ... probably, but then who really knows? I'm not as entertained by the local news as when I was a kid, so I can't really compare fairly.
I'm a pretty young guy, and I was gobsmacked to learn that in the early 70s there was an 18 month period with 2,500 domestic terror bombings in the US. Casualties were relatively low, granted, but still...
Trust and tolerance are inversely proportional. As social tolerance for differences improves trust in society declines. Trust is a function of being able to understand what others around you are thinking, which is hurt by tolerance of differences of upbringing.
I am not sure that is true, or at least I think it leaves out important bits. Specifically, trust has a lot to do with doing what you say you will do. In part, I suppose, that implies that people will say what they are thinking, but it also means they honor commitments made previously that you might not be present for. You can be very different from me in many ways, but so long as I think you will follow your word and commitments, I can trust you.
Moral decline sounds like it fits: All the mature decent old people keep dying off and being replaced by these immature babies with no self-control. Been happening forever, kinda weird.
There has been moral progress due to advancements in science, technology and knowledge. There is less suffering today than in the past and more flourishing.
Re footnote 4 - That's kind of the point of Haidt's "The Righteous Mind" - liberals don't realize some people have more moral "flavors" than care and fairness. So they can ask people about morality and not think about other things like tradition, sanctity, etc.
It doesn't sound like a very good study, but I don't think the perception of declining morality has much to do with declining morality however it is "measured". There are two ways to interpret all those reports of declining morality for the last several thousand years. Nehemiah anyone? Either morality can indeed have been declining since the Garden of Eden or Olduvai Gorge, or many, possibly most, people in every age have had a sense of declining morality.

That latter should not be a surprise. We are taught in every society to develop a moral sense of what is right and what is wrong. It's part of our training to live in human society, and I can't think of a culture that doesn't frame human behavior in terms of moral judgement. Despite this, no society has ever lived up to its moral precepts. People aren't easily programmable robots, and moral codes are always full of conflicts, contradictions and compromises. Thou shalt not kill, unless you serve in the military, in which case you might get a medal for killing or be executed for failing to do so.

That means, as people age and experience life as adults, they are exposed to a world far from that about which they were taught. Most people do follow their childhood moral precepts to some extent. Society would be much worse than it is without that. However, almost all of us wind up compromising on some points. Worse, many people violate those precepts but only some of them are punished while others thrive. If you don't have a sense of declining morality as you age, you are either extremely well grounded or simply oblivious.
P.S. I was reading a study, "The Age of Anxiety: Birth Cohort Change in Anxiety and Neuroticism, 1952-1993", which addresses anxiety in teenagers. "The average American child in the 1980s reported more anxiety than child psychiatric patients in the 1950s." More recent reports on adolescent anxiety suggest that the trend has continued, though, I'll add cynically, with climate change replacing nuclear war as the big bad. I'm sure my parents were all relaxed and mellow when war broke out in Europe and Asia during the Great Depression, and my great^N grandparents positively euphoric in the face of the Revolutions of 1848 and the American Civil War. As with adults facing declining morality, could it be that adolescence is a time of anxiety?
>In the 60s, in the city center, they felt comfortable walking alone at night. Now, in the suburbs, still they feel comfortable walking alone at night.
ROPE and HDI are Bayesian statistical terms. ROPE = Range of practical equivalence. HDI = Highest density interval.
The gist of it is that you pick a minimum effect size that you would be willing to consider interesting, the authors of this paper chose +/- 0.1 standard deviations which I believe is the value Kruschke recommended for very conservative analyses (See the Kruschke paper from 2012 titled something to the effect of "Bayesian analysis is better than T-tests, you stupid losers" for more on this).
You then permute different values from posterior distributions of group means and sample standard deviations to produce a sort of ad hoc posterior distribution of effect size.
The HDI is the smallest portion of this effect size distribution whose integral is equal to 0.95. In theory (and if I remember correctly), the percent of the HDI that falls outside of the ROPE is roughly equal to the percent chance of a significant result.
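The decision procedure described above can be sketched in a few lines of NumPy. This is a minimal illustration rather than Kruschke's actual R implementation: the posterior draws for the two group means and the pooled SD are made-up values standing in for real MCMC output, and the +/- 0.1 SD ROPE matches the value discussed above.

```python
# Sketch of the ROPE/HDI decision rule with illustrative (not real) posteriors.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws for two group means and a pooled SD.
# In a real BEST-style analysis these would come from MCMC sampling.
mu1 = rng.normal(0.00, 0.05, 20_000)
mu2 = rng.normal(0.12, 0.05, 20_000)
sigma = np.abs(rng.normal(1.0, 0.03, 20_000))

# Posterior distribution of the standardized effect size.
effect = (mu2 - mu1) / sigma

def hdi(samples, mass=0.95):
    """Smallest interval containing `mass` of the samples."""
    s = np.sort(samples)
    n = len(s)
    k = int(np.floor(mass * n))
    # Slide a window covering `mass` of the draws; keep the narrowest one.
    widths = s[k:] - s[: n - k]
    i = np.argmin(widths)
    return s[i], s[i + k]

lo, hi = hdi(effect)
rope = (-0.1, 0.1)  # the paper's conservative +/- 0.1 SD region

# Fraction of posterior effect-size mass falling outside the ROPE.
outside = np.mean((effect < rope[0]) | (effect > rope[1]))
print(f"95% HDI: [{lo:.3f}, {hi:.3f}], P(effect outside ROPE) = {outside:.3f}")
```

With these toy numbers the HDI straddles the ROPE boundary, so the result would be deemed inconclusive rather than significant, which is exactly the conservatism the wide ROPE buys you.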
The fact that the authors chose a ROPE of +/- 0.1 is a point in their favor since they're really setting the bar low for a significant result. However, this method still relies on a lot of steps that are susceptible to researcher bias and I'd have to read more of their paper than I have time to in order to get a good grip on how trustworthy their work is.
I will say, when I studied this method several years ago the first thing I said to the person reading the Kruschke paper with me was something like "Wow, somebody could definitely use this to bamboozle a paper reviewer into publishing nonsense one day if they wanted to."
Additional note. This method is also commonly executed through an R package that no one except Kruschke himself actually understands. I believe it relies on a quadratic approximation method for finding posterior distributions. I recall another person criticizing that approximation once in the past as well. I failed to understand the R package a long time ago, so I can't truly vouch for that. Just throwing it out there.
Many would argue that women's sexual self-determination (including the option to sell nudes) was a major step forward in sexual ethics and an important aspect of ending women's oppression. (I personally would add a couple of caveats, but the point is, "all-time low" depends on your point of view - which is what Scott is aiming at, I suppose.)
Well, "honesty" and "kindness" components of it were discussed. Do you anticipate greater success at grounded definitions for those?
Sure, and everybody reasonable would agree that it's an important component of a "morality" quotient.
Any sermon worth a damn has strong rule of law upholding it, but it still does well to remind people. And yes, morality is basically applied game theory, but admitting this is a weak move in the game, especially if you wish to change the rules! So the whole convoluted mess is sadly inevitable.
I mean, you can assert that you’ve discovered the universally correct moral code, but can you prove it?
I'm with Mike on this one: while we haven't found the exact best moral system (and it would be nigh-impossible to prove), I don't think we should be shy pointing at some alternative systems and saying they're worse. Killing your drinking wife, slavery, and burning of witches are all things I'm happy to assert are wrong, not in my eyes or by my system, but by some universal and external morality.
Sam Harris' book The Moral Landscape has largely shaped my views here, good read.
I am 100% on board with a universally applicable external morality that is objective rather than subjective, but it’s dangerous territory. If you’re going to go there, you’ve got to be prepared to defend it.
I think ‘the violent death rate of humans’ is a pretty weak assertion. It doesn’t automatically defend itself. Is a universe where I lock every human in an invulnerable self-contained box that prevents them from doing anything but dying peacefully of old age the most morally correct universe? Is it even morally acceptable? Or is it a universe where I am a loathsome slaver, even if I’ve saved countless lives?
This sort of hand-waving of what morals Should Be based on passionate intensity about avoiding harm is terribly weak, even if it seems to play well in a soundbite.
I agree - I should have been more clear, I'm with Mike's first point of "it's reasonable to claim some moral systems are superior to others", but I don't agree with the second point that 'violent death rate of humans' is the best metric, though I do think in practice it's a good heuristic. I agree that minimizing harm, while also generally a good heuristic, is always woefully insufficient and sometimes actively misleading as a moral principle.
Do you propose any other simple metric that you think performs better than 'violent death of humans', or a guide to better action superior to 'avoid harming others'?
I think it’s pretty poor form to claim to have discovered the one true objectively correct morality without showing your work, unless your argument is so well-known or traditionally established that you can take for granted that everyone already knows your reasoning. It smacks of skipping steps and I’m calling him out for that.
Especially because I think ‘prevents violent death’ is a very badly-performing metric. You can easily justify arbitrary actions on the basis that you think it might prevent an increase in future violent deaths, including a lesser number of violent deaths. It’s an all-purpose rationalization for any desire of your secret heart that needs justification. It’s also pretty badly irrelevant to most issues of moral corruption in quotidian life: won’t tell you much about whether it’s bad to hate someone or steal a cookie from the jar or return a lost wallet, not unless you engage in pretty tendentious reasoning.
No, if you want a safe and simple heuristic what you want is probably something exclusively deontological: don’t murder, don’t steal.
Easily. Just kill anyone who disagrees!
I'm not sure if the universally correct moral code will ever be provable, but I do propose a starting point for deriving it: https://twitter.com/DPiepgrass/status/1645498292552503301
Setting aside morality, the accuracy of the term "illegitimate" has greatly declined. The birth of a child in vs. out of wedlock used to have very significant legal implications; now, much less so.
The complexity is that being against illegitimacy cashes out as cruelty to illegitimate children, who are surely not responsible for the circumstances of their birth.
True, they unfairly carry that stigma, but the charge of illegitimacy is leveled against the perpetrators, which discourages it. Then the consequences for the perpetrators are bodied forth in the children.
The phrase "illegitimate child" inherently levels the charge against the child, and nobody says "illegitimate parent" or "illegitimate sex". Maybe better to talk about "unprotected sex" instead?
"Absent father" (Or mother, less frequently) directs the blame at the correct target and also hinges on the correct property, i.e. the level of parental support, rather than the formalisation of marriage.
It is. On the other hand, the loss of the stigma against illegitimacy has resulted in a lot more children growing up without both parents, which is even worse.
In 1965, 75% of black babies in the US were born to married parents; now it's about 12%.
>There are tons of things that everyone naturally assumes and take for granted until an autistic science man does this one weird trick of asking himself "wait a minute, is that true actually? let's think about this rationally and do some tests", and then it turns out the world isn't flat, the planets aren't orbiting around Earth, time isn't the same everywhere, dreams are not prophetic, and free will is at best a fraught concept (normies hate him!)
Yes, but Scott said "probably".
Consider the numbers. Autistics are rare. Moral guardians are common. And the Pope is much more likely to endorse moral guardians' denunciations - at least, the half on his side - than an autistic-produced heresy.
This is a study *published in Nature*. Nature might as well be a papal bull; it's probably #1 authoritative source and also has a political agenda. It's *possible* for an autistic heresy against the established thought to be in there... but for a given social psych article tackling common wisdom to be such heresy is a lot less likely than for it to be somebody gleefully burning his enemies.
https://slatestarcodex.com/2013/06/22/social-psychology-is-a-flamethrower/
I think one has to distinguish between the political/economic/cultural and the moral power of Rome. There's no reason why Roman society couldn't be at its peak in a time of moral decline. It used to be thought that Britain's greatest periods were the late 16th and 19th centuries; but many people in both periods regarded themselves as in a time of moral and social decline, and in the Elizabethan era they were probably right. Similarly, the Romans looked right back to their patriarchal roots when they talked about moral decline: they cared about the fall of virtus, not about the increasing number of marble buildings or the quantity of grain imported to feed the burgeoning lumpenproletariat. Perhaps they idealised what was almost a pre-historic period to them; but one can't object to them judging themselves by other metrics than wealth and power.
In fact, I wouldn't be surprised if political/cultural progress and moral decline went hand in hand. That's the traditional view, at any rate.
https://i.pinimg.com/originals/25/bd/8b/25bd8b7f6e57cdfd17747b25d753b2ce.jpg
But in all seriousness, I think you might be a little bit harsh on the poor old Japanese. It doesn't seem impossible that as a country becomes more civilised, genuine virtues might become unnecessary or even unhelpful. The instinct for independence, for instance, and the self-reliance that goes along with it. Or the toughness engendered by poverty and war. Probably to the Japanese and the old Europeans, certainly to the Romans, it appeared that centralisation had brought in a new world where scrounging, flattery, greed and cowardice were the way to get ahead.
The military virtues of the inhabitants of Rome were in decline as the imperial capital became ever more secure and wealthy. Imperial military power tended to come from further out. Trajan, for example, was born in Spain.
> There's no reason why Roman society couldn't be at its peak in a time of moral decline. It used to be thought that Britain's greatest periods were the late 16th and 19th centuries; but many people in both periods regarded themselves as in a time of moral and social decline, and in the Elizabethan era they were probably right.
Funnily enough, I was reading a book that touched on this topic just a few days ago. The author blamed it on a change in preaching. Apparently late medieval sermons were mostly exhortations to good behaviour rather than expositions of theology, but then Protestant reformers took control of England and decreed that, from henceforth, sermons were to focus on points of Protestant doctrine rather than morality.
That's fascinating, could I ask the title of the book?
Like most people I assumed without thinking that doctrine became less important after the Reformation.
"Like most people I assumed without thinking that doctrine became less important after the Reformation."
And that is fascinating to me. If people were furiously debating about "your theology of baptism is wrong" (and didn't confine it merely to debate), why would you think they didn't care about doctrine? That it was a simple message of "Now the Bible is in the vernacular tongue, read the Bible, and follow your conscience"?
Diarmaid MacCulloch has an entire book on the Reformation (and this is only confining it to Western Europe) where the wars of doctrine are gone into in detail:
https://www.amazon.co.uk/Reformation-Europes-House-Divided-1490-1700/dp/0140285342
There's also his biography of Thomas Cromwell, and the careful path people had to steer around King Henry VIII - both Catholics and various Reformers could find themselves burned at the stake:
"Now, in autumn 1538, Lambert confronted a prominent London evangelical and royal chaplain, John Taylor, with outspoken scepticism about the bodily presence of Christ in eucharistic bread and wine. Taylor called on Robert Barnes to help him defend a real-presence theology which avoided papal error (Barnes was, after all, the most obvious and authentic Lutheran in all England), and he then brought in Cranmer. The Archbishop prudently put Lambert in confinement again – but all in vain: fatally convinced of his own rightness, Lambert appealed to the King to hear his case. This was a disastrous misjudgement.
Henry’s customary inclination to occupy himself with theology when lacking a wife made him take a particular interest in the case, and his mood was currently veering towards the conservative end of his volatile spectrum. That was apparent from a new royal proclamation on religion: a personal public intervention, sidelining his Vice-Gerent, who one might have thought had already produced enough regulation for the Church less than two months before. The proclamation followed up various of Cromwell’s orders, and repeated condemnations of Anabaptism and Becket, but it also imposed censorship on the printing press, including unauthorized versions of the Bible, and it expressly forbade clergy to marry – a reaction to the fact that in southern England a number of clergy were doing just that (not to mention the Archbishop of Canterbury’s wife Margarete, lurking obscurely in one of his palaces in Kent).
Even if we did not possess a draft of this proclamation emended in the King’s own hand, the general shapelessness and theological incoherence of the final version is redolent of brusque royal papering-over of disagreements among his bishops. Worse still for John Lambert, this document was issued on 16 November as part of the theatrics in the most high-profile heresy trial that early Tudor England had seen, with Lambert himself and King Henry as joint and opposed stars of the proceedings. The Supreme Head of the Church of England chose to preside himself over the event in Westminster Hall, symbolically clad in white, with his bishops merely as assistants to undertake the theological detail of prosecution. Cromwell’s only substantial part was to house the condemned prisoner, presumably at The Rolls, before Lambert was taken to the stake at Smithfield on 22 November: the same fate as Forest had suffered there six months before, but for polar-opposite beliefs.
The whole Lambert business hugely embarrassed John Foxe when he wrote it up in Acts and Monuments, given that it implicated some of his chief Protestant heroes in burning a man who looked in retrospect like a good Protestant. Cranmer in particular has come in for plenty of abuse for inconsistency among later writers. Yet the Archbishop’s own theology of the eucharist at the time was opposed to the views of Lambert, who may also have affirmed some real radicalism on infant baptism and the nature of Christ, and the Lutheran princes of Germany expressed no disapproval of the condemnation. Cromwell kept his counsel. Two days later, effectively in a continuation of the same theatre, Bishop Hilsey returned to Paul’s Cross to deliver a definitive exposure and mockery of the Holy Blood of Hailes, this time with the relic on hand as his visual aid – in careful pairing with this symbol of old error, new error was represented by four immigrant Anabaptist prisoners standing beside the pulpit bearing their heretics’ faggots, preparatory to burning at the stake. The occasion was a necessary act of damage limitation for the evangelical establishment in relation to King Henry."
Thank you.
The Shape of the Liturgy by Gregory Dix.
I was under the impression that the Victorians thought they had the right ideas about sexual morality (and social issues like slavery) and they were distressed by the lewdness of earlier culture like Shakespeare and 18th century writers.
I don’t think the Victorians in general saw themselves as in a time of social decline. There were some people who were distressed by the loss of religious faith (like Matthew Arnold I believe).
I guess that's right, my apologies. I was thinking of people like Arnold and Froude, who were both elitists of a sort. So "many people" isn't at all accurate.
If Rome can be in a golden age while also being at the peak of moral degeneracy, that implies that moral degeneracy has nothing to do with whether a country is a good place to live, so maybe people should chill about getting the government to enforce morality.
But perhaps golden ages aren't good places to find oneself in. "May you live in interesting times". Rome at her peak was filthy, miserable and unstable, a little bit like Elizabethan England. A booming population coming to the cities to join the lumpenproletariat, tyrannical government, constant civil war (or in the English case, plots and persecutions). Enough to make any citizen long for the good old days of Cincinnatus, or merrie England...
Ada Palmer argues that the Italian Renaissance wasn't a good time for most people to live. High status people were showing off with the great art we value, but they were showing off because they were competing hard in unsettled times.
There's a famous line delivered by Orson Welles in "The Third Man" that is of possible applicability here ...
Well if they don't think the golden age of Rome was a good time to live, then perhaps people who want moral righteousness should stop talking about how [insert modern problem] caused the fall of the Roman empire.
Or it explains why the country peaked (and then went down) instead of continuing up. We should not expect moral decline to happen when things are at their worst, but at their best. People become complacent when their lives are easy. People try harder when the negative effects of their actions cause them harm.
Livy lived from 59 BCE to 17 CE, the Roman empire hit its furthest extent under Trajan (98-117 CE). There were a hundred years and twelve whole emperors between when Livy lived and when the Roman Empire hit its peak.
Ideas don't last long enough to become traditional moral values if they aren't pro-survival or at least pro-flourishing, which implies that abandoning them is anti-survival and anti-flourishing. So, the richer and more powerful a society is, the more "moral degeneracy" it can tolerate without everything immediately going to pot.
But golden ages don't last forever, and the one we're in now is fundamentally unsustainable.
Hypothesis: Countries are apt to be at their peak when they're using up the moral capital developed in earlier ages. I'm not sure this is true, but it sounds good.
I think that's right. Developing moral capital is slow and boring. How much do people care about the eighteenth century these days? But it was then that Europe built itself up after the chaos of the past two hundred years, in preparation for the much better-remembered industrial civilisation of the nineteenth century.
I wonder whether the perceived “lack of morality” that people feel is associated with the decline of religion (as Nietzsche famously said, God is Dead). Without that external rule of religion, people may still be moral enough not to mug others/do bad things that the poll questions asked, but their actions may not fully align with *another* individual. And so to this other individual, the world is less moral. (Note this other individual doesn’t need to be religious either).
I completely agree that this study tells you more about the researchers’ bias rather than the participants’ bias. There is definitely shoehorning of findings, based on shaky assumptions of what these polls are actually reporting.
I think the point of including the opening quote is that a sense of decline in morality is pretty universal. I am pretty sure Savonarola, Torquemada, and Martin Luther all felt like morality was in steep decline, even if they lived at a time when Christianity was as strong as ever in Europe and their confirmation bias saw evidence for this decline in different places.
But, yes, I think many people think like this. You can hear the decline of religion lamented not just among the usual suspects on the religious right, but from more moderate voices, and even implied in secular discussions about despair and lack of meaning in working class America, and among secularists who think religion is a good idea.
Many people point to a God-shaped hole in our souls and society. They don’t see that they’re really talking about a hole-shaped God – almost infinitely plastic, like putty, to fill any gap.
And that, I think, goes to Scott’s point:
Even a society of perfectly moral, pious individuals would never be a perfectly moral society (by its own standards) for long – in large part because morality is subjective and a moving target with blurry boundaries.
Even within a religion, there’s always going to be a pulsating mismatch between any individual’s or congregation’s fixed definition of morality and that shared by the larger community. You can, for a while, fill the seams with more God-putty and dogma to cover up the poor fit, but you will have to lay it on pretty thick over time (inquisition-style), and eventually you’re bound to get a giant rift (reformation-style).
Insightful - thank you for taking the time to share your thoughts. I wasn't aware that similar sentiments about declining morality existed even in more religious times. I think the final couple of paragraphs that you quote hit the nail on the head.
> I think the point of including the opening quote is that a sense of decline in morality is pretty universal. I am pretty sure Savonarola, Torquemada, and Martin Luther all felt like morality was in steep decline
Alternative hypothesis: the sense of moral decline isn't universal, it just occurs at particular times and places, like first century Rome or sixteenth century Germany or late 20th/early 21st century America.
There's other long periods where morality stays steady (by the standards of one generation going into the next) or may actually increase. Do we see complaints of decreasing morality from, say, the early Victorian period in Britain?
Agreed, history is full of examples of Revivals and Awakenings where most people at the time would say that morality improved with it. I do think complaints of more deline are more common, but there are quite a few times it was perceived to have increased. Here's a list of just Christian ones, and I believe it's true for nearly every religion. https://en.wikipedia.org/wiki/Christian_revival
The early Victorians thought things were as bad as ever or worse. The mid-Victorians were sure that there had been a lot of progress. They were rather pleased with the fact that people were no longer expected to tolerate or encourage serious drunkenness at dinner parties, and that nobody tried to bore them with stories of their sexual conquests under the impression that they would approve or be envious. (Both of these had been characteristic upper-class vices of the Georgian period.) They knew how much crime had gone down since the police were established, and knew that the lower crime statistics they had included a larger proportion of the crime that was actually happening. And while there was still a big poverty problem at the end of the Victorian period, it was a poverty with cleaner homes, shorter working hours, some education and much less child labour.
My impression is that the whole 1860-1914 period in Britain was characterized by increasing moral optimism.
“They were rather pleased with the fact… that nobody tried to bore them with stories of their sexual conquests under the impression that they would approve or be envious.”
I would be rather pleased too, if that happened.
The Victorian English were morally superior to the Regency English on most objective measures.
Certainly possible. I don’t know the moral values and attitudes throughout world history well enough to reject that hypothesis.
To be clear, though, I never meant to imply that I thought everyone, everywhere had this sense all the time. So maybe universal was the wrong word. I should have said “very, very, very common” 😉
However, the strict Victorian morality actually strikes me as a typical reaction to a sense of moral decline, though I can’t point to a specific quote to back that up. But by the early 1800s, society was changing very rapidly in every way. Probably too fast for some. The world had been (and was still being) torn apart by revolutions, war, industrialization, and enlightenment ideas that challenged ancient truths and values – even God. Victorian morality itself was probably not considered moral decline by many (even if some writers seemingly had issues with all the vanity and pretensions), but it was born in a bubble of privilege, surrounded by a world effectively carved into our cultural imagination by Charles Dickens – rife with crime, prostitution, and poverty (which was often considered a moral failing). If Victorian-era elites didn’t see the road to hell when they looked out their carriage windows, and feel morally superior to their contemporaries, I’d be surprised.
Regardless of which hypothesis is closer to the center of the complex bullseye, however, we probably agree that morality is a moving target to some degree, and that it likely moves at different speeds at different times and places.
>However, the strict Victorian morality actually strikes me as a typical reaction to a sense of moral decline, though I can’t point to a specific quote to back that up
Yes, a reaction to the moral decline of the Georgian era.
>Surrounded by a world effectively carved into our cultural imagination by Charles Dickens – rife with crime, prostitution, and poverty (which was often considered a moral failing)
Dickens' books are what an effective propagandist response to plead with the masses to cease their moral decline looks like.
Moreover, the Victorian era was quite long, and so was Dickens' career. Dickens started serializing Oliver Twist in 1837, the same year Queen Victoria acceded to the throne. Twist was sort-of contemporary, describing workhouses set up by the Poor Laws of 1834.
However, David Copperfield, say, was published in the 1850s, and it is no longer contemporary but an *autobiographical* novel, which is to say, about the past. Great Expectations, published in the 1860s, is set (or rather begins) during the Napoleonic Wars. For example, by the 1860s (if I am reading Wikipedia correctly) the prison ships featured in the novel were no longer in use in Britain.
Interesting. Thanks.
One theory I heard is that Victorian morality represented the rise of bourgeois, middle class values. Aristocrats in the 18th century don’t care about being respectable because their social position doesn’t depend on that, but the bourgeoisie has to prove its respectability, presentability and merit.
In my opinion Dickens’ novels are more a response to the social problems caused by industrialization than to moral decline in the population. Poor people (and unpretentious middle class people) in Dickens are presented as possessing a kind of innate morality, with the exception of the obvious rogues.
"Maud", by Tennyson, leaps to mind as an obvious example of early Victorian complaint about moral decline - the complaint is in character, but I think he's speaking at least partly with an authorial voice.
I wouldn't say that Christianity was as strong as ever. A lot of people were already crypto-pagans thanks to the Renaissance.
Depends what we are talking about, "Christianity" is/was huge in the West (including Russia). Like, you find Christian modes of thinking in the most unlikely places, like the writings of French revolutionaries, Karl Marx and Ray Kurzweil !
Russia is less religious than the US (and Ukraine).
This really isn't true until the mid-seventeenth century, actually. Classical culture and Christianity simply weren't in tension during the Renaissance proper.
Pretty sure? Do you have sources or quotations to back that up? In the case of Torquemada, he would have witnessed the rise of Protestantism which he would presumably have viewed as a catastrophe, which would not have been confirmation bias.
I’m not sure we disagree.
Yes, I think Torquemada would consider the rise of Protestantism to be a sign of moral decline. I am not sure it actually *was* moral decline, but he would interpret it to be. This is all mindreading, but I would consider that confirmation bias.
I also suspect that the "quote by Livy" is probably made up.
(Hint: the authors of the article do not reference a specific book by Livy.)
The quote by Livy is not made up. It's not in a specific book, but in the preface: https://pressbooks.claremont.edu/clas112pomonavalentine/chapter/livy/
"with the gradual relaxation of discipline, morals first gave way, as it were, then sank lower and lower, and finally began the downward plunge which has brought us to the present time, when we can endure neither our vices nor their cure."
Livy was writing right after the Roman republic, which had been stable for centuries, broke down into a century of increasingly violent rioting, followed by increasingly deadly civil wars. The last orgy of violence ended with Rome becoming a monarchy in all but name. If Livy didn't think morals were declining, I would wonder what he was smoking.
Thanks!
"Many people point to a God-shaped hole in our souls and society. They don’t see that they’re really talking about a hole-shaped God"
What a great line. Thanks!
"Many people point to a God-shaped hole in our souls and society. They don’t see that they’re really talking about a hole-shaped God – almost infinitely plastic, like putty, to fill any gap."
I don't really think that works as a come-back. We're talking about a hole-shaped God? OK, then, but -- why a hole-shaped *God* -- why not a hole-shaped literature, or music, or drugs, or imaginary friend, or relationship, or approach to sex, or whatever else? Those things are pretty plastic too, and yet it's a *God*-shaped hole that people talk about, and find that they can fill.
One might say, "People talk about a key-shaped hole, but it's actually a hole-shaped key." Both are true -- but what does it prove?
Well, not everyone fills their hole with god. People fill the holes in their lives with all kinds of stuff – some benign (like philanthropy), others toxic (like alcohol or gambling). Few things are quite as malleable as the concept of God, though some may be quite plastic. Also, very few people go around advocating explicitly that society would be better off if everyone realized that sex, drugs, and shopping were the answer. So religion is a bit different in those ways. But my point should be valid for anything you want to fill a hole with:
If you are going to claim there is an X-shaped hole somewhere, you and your audience should have a shared understanding of what the shape of X is. And if you keep changing the shape of X to fit new holes, other holes, and specific nooks and crannies, then it no longer makes sense to describe the shape of the hole in terms of X, but rather point out that X can seemingly be shaped to fit a multitude of different holes.
The issue, then, is that people who think there’s a God-shaped hole somewhere typically think that God is a universal solution, so whatever shape the hole was, they would say it was God-shaped.
And keys and keyholes are not typically plastic, so won’t work at all in this context.
Some fair points, but not *entirely* fair, I think; at least a great many of the people using such language do not *at all* have an infinitely malleable view of God, and are making the claim about a God about whom they also make a rather long list of fairly specific theological claims.
So, I think the key analogy is relevant after all; and, in particular, relevant to show that the mere fact that one can turn around the hole/hole-filler statement says nothing at all about plasticity.
The specific claim usually being made is that God (and only God) can fill the hole in a way that *none of those other things* (sex, music, etc.) *can*. Moreover, the concept of God is somewhat rigid. The claim might be false. Or it might be true. But I don't think it's vacuous in the way you seem to be suggesting.
> the mere fact that one can turn around the hole/hole-filler statement says nothing at all about plasticity.
I agree. The plasticity claim is slightly different, but related. The phrase “hole-shaped god” is meant to suggest that the gods came into existence after the hole, and in response to it. As such, maybe the key analogy works after all, in that keys are often made to fit locks, not the other way around.
> many of the people using such language do not *at all* have an infinitely malleable view of God
On the individual level, I will agree with you that probably no one has an “infinitely” malleable view of their god. It’s just a little squishy and poorly defined around the edges. If you zoom out, however, people in the same community – even the same congregation – will differ about the exact “shape of god”, and most non-zealots’ idea of god is soft enough to accommodate the differences they have to deal with daily. But the further out you zoom, the difference in shape will just get larger. Once you get to a global and historical level, “god” is such a large and amorphous concept that it is nearly infinitely malleable.
> The specific claim usually being made is that God (and only God) can fill the hole in a way that *none of those other things* (sex, music, etc.) *can*.
Yes, I agree that that is often the claim from many religious people. And in the absence of perfect mind-reading, it seems unfalsifiable, even if everyone converted today.
If god doesn’t fill the hole for someone, true believers can always say that those people don’t believe enough or observe correctly. That they just have to try harder.
If something else fills the hole, they can say people are just deceiving themselves or are being led astray by false beliefs.
And if religion does fill the hole, believers can take credit, even if the particular shape is different, because one’s relationship to god is personal.
Most have a particular god in mind, but some will make the claim that religion, not a particular god, fills the hole. And that that is a good thing. I can understand the point, but find it very cynical to suggest that people are better off believing things that are false, than to (learn to) live with uncertainty.
If you believe in a god that fills a hole in you, however, I don’t think this will convince you. Nor am I sure I would want it to. This is not the point on which to change one’s faith.
"But the further out you zoom, the difference in shape will just get larger. Once you get to a global and historical level, “god” is such a large and amorphous concept that it is nearly infinitely malleable."
I would not really agree within the context of (say) western civilization in the past 1500 years. There are of course many sects, accounting for relatively small numbers of people, where that is true; but mainstream orthodox Christianity, Judaism, and Islam have relatively well-defined views of God that change relatively little even across denomination (and even all three agree on many things).
"If god doesn’t fill the hole for someone, true believers can always say that those people don’t believe enough or observe correctly. That they just have to try harder."
I agree that the claim, if made as an evidentiary point, is subject to those weaknesses. I've more often encountered it in a rhetorical or personal frame (somebody recounting their story, etc., or using that story to try to convince others to convert). I think it has strength in those contexts. I also think that it has *some* strength in an evidentiary way, if handled rightly, given the large number of people for whom it is true -- and, again, the fact that it is *God, specifically*, and not those other things, that has been found to be hole-filling. (Your points about the ill-definedness of God would, again, apply just as well, and in fact much better, to many of those other things, and yet there are relatively few people making those claims for those things.)
"I can understand the point, but find it very cynical to suggest that people are better off believing things that are false, than to (learn to) live with uncertainty."
On this we agree. Hole-filling by a false god is not to be desired.
I think I have expressed what I wanted to and will not necessarily reply again unless you raise new points, but please feel free to reply again to anything I said, and in any case thank you for the discussion.
Well, there is a well known theory that religion was very important in the development of complex societies:
https://link.springer.com/article/10.1007/s12110-005-1017-0
https://www.tandfonline.com/doi/abs/10.1558/poth.2004.5.2.159
(both are behind paywall, but can be downloaded from researchgate.)
Although it has recently been questioned:
https://www.nature.com/articles/s41586-019-1043-4
(I noticed it was retracted, but then resubmitted, and I haven't read it anyway; this is just to show that the god-as-societal-regulator thesis is controversial.) The point is, if religion is really important for complex societies, then perhaps those who claimed that atheism will lead to societal decline and ills were not completely unjustified. This is not to say that religion is really necessary, but perhaps we need some replacement instead of going world-view neutral.
Yeah there’s a mutual comprehensibility thing that’s discounted here. Like anything outside of my one specific culture I grew up on always throws me for a loop.
For instance paying other men to fix your house is very haram and something akin to cuckoldry and I know it’s just a totally normal thing that people do and I have no justification for the feeling whatsoever.
Yeah when we had larger widespread religious experiences it kind of syncs everyone up. Wokism is a large sort of moral approach but that stuff seems bad to me too.
If only people could abide by the One True Way which is just coincidentally the one that I happened to be raised in.
So only pay women to fix my house, got it.
You know, I’m embarrassed to say I never considered this loophole, and while I have to admit that I don’t have the same level of revulsion to it, it would feel like I was cheating on my wife somehow.
I am aware that makes no sense.
This is very interesting. When my dad’s new immigrant neighbor was engaging in some dangerous DIY activities we tried to stop him but it was almost as if he’d rather risk killing himself than ask for help. A friend of mine said “Oh yeah, this is a cultural thing and not just in my culture. Everyone I know with an immigrant dad has had to talk him out of fixing things when he had no idea what he was doing.” I thought it was about saving money but didn’t realize there might be more to it.
It’s a pride thing I think. I want to be able to know I am equal to my home and my family’s shelter comes from me.
In his book *Big Gods*, Ara Norenzayan makes a pretty good case for religions predicated on watchful gods concerned with human morality as necessary for large-scale civilization, and invocations of those gods' watchfulness keeping humans a bit more honest, allowing higher trust modes of organization - Norenzayan uses the example of an international set of Muslim banking organizations.
A key part of these religions is difficult-to-fake, public signals of fealty - the daily prayers of Islam being a great example.
In that kind of light, even if people are fundamentally kind, if Norenzayan's thesis is true, a decline in religiousness and a decline in morality more broadly would more or less be aspects of the same thing.
“We” have not moved beyond this.
Nothing in Social Science makes sense except when you understand that (especially in the Anglo-world) Social Science after WW2 became the self-appointed religious police of society. They were to be the guardians of morality and thought (and to punish those who stepped out of line) so that something like “it” would never happen again.
They are still at this, still playing that same role. Mostly in obvious fashions, but of course the part their intellectuals enjoy the most is finding excuses to paper over any inconvenient incompatibilities between either different religious texts, or religious text and reality, ala Summa Theologica.
Point is, the priests of 0CE, of 1000CE, and of 2000CE all (mostly) have no idea what they are actually doing; but they are doing the same thing. But you won’t get this if you think that the social role of priest in 2000CE is played by the guy who graduated from Fuller Seminary, rather than the girl who graduated from Harvard Social Studies.
It remains an open question as to whether these 2000CE priests are *also* capable of creating durable social value. Like their 1000CE Christian equivalents? Or like their equivalents in say meso-America or Carthage? I’m skeptical of value creation, but don’t have time here to push this further.
Lee Kuan Yew (I think? - can't find it now) told a story about visiting London after the war and finding a newspaper stand in a train station with no one attending it. The way it worked was you took a paper from the pile and put your money in the pot; even if money was sometimes stolen, the losses were low enough that this worked out as a way to run a business.
The idea of trying that today is ludicrous. Since I first heard it I’ve always thought of this story as a kind of existence proof for the idea that _something_ has changed.
Farm stands in the country operate this way. I take fresh veggies and leave money in a tin.
My kids’ school made a big deal at orientation that while you *can* put a lock on your locker, no one does. And they’ve never had to investigate a theft. The students literally told the principal when she first started “that isn’t how we do things here.”
Also most of my neighbors don’t lock their doors.
Some of them get robbed though. One of our favorite farm stands now has a bank grade money box with a one way chute. A local chicken farmer used to notice a dozen eggs getting stolen here and there towards the end of the month, but continued with the honor system until someone stole all the eggs and the cash box.
For what it's worth these are still quite common where I live (Vienna, Austria), where you find newspapers in bags attached to street signs with a similar "honesty pot" above them.
That said, I've always assumed that this works because newspapers make money from adverts, not only sales, and any copy that goes missing is a relatively small capital loss but still counts as a copy "in circulation" for advertising purposes.
The Metro, a free newspaper that makes its money from advertising, is ubiquitous on the tube.
There's also the "nobody locked the front door back in my day" omnipresent anecdote.
I think, even if lax security does imply moral fibre (and it does seem intuitively likely), using that argument can lead to poor security. Things should be secure just in case, even if an attack is unlikely currently.
I'm talking about security in general, not just home security. Particularly credit cards (see demost_'s comment).
Security has costs. Locking and unlocking a door takes a little time repeatedly. Losing a key or other similar failures takes more time and it's unpredictable.
People in cold climates seem less likely to lock their doors, presumably because they don't want to leave their neighbors outside to freeze.
When I lived in Newark, Delaware (a medium-sized college town), there was a while when I didn't lock my door. The weird thing is that if I mentioned it, people would get angry with me.
My theory is that they didn't want to hear me complaining if I got burglarized, but this is only a guess.
When Denny Hastert became Speaker of the House around 1998, and thus third in line for the Presidency, he then had to keep nuclear war secrets in his house. The security services in charge of these documents requested copies of his house keys (he lived 50 miles outside of Chicago). It turned out he didn't have any keys because he didn't have any locks on his doors.
Then he turned out to be a gay child molester.
But nobody remembers that because he was so boring.
The past is really complicated, so it's not surprising that survey questions about it aren't very reliable in what people respond.
I mean, my grandparents never locked their doors; my mother and I certainly do. It wasn't just an anecdote, I lived at their house often.
They did once have a break in into their locked garage, which is kind of ironic. The person stole 1 Kenny Rogers cassette.
By "anecdote" I don't mean to imply that it's untrue, just that it's a sort of thing that's hard to quantify and make a pretty graph in service of some abstract point.
Well sure but often things are "common anecdotes", because they are true. Sometimes not, but generally.
I have a friend who lives in a big coastal US city who sleeps in the carriage house out back but has the kitchen in the main house. They have ended up leaving their house unlocked all the time because it’s more convenient than constantly unlocking and locking. They’re in a very walkable neighborhood, easy biking distance from downtown.
I suspect it would actually be totally fine for most people to leave their door unlocked.
I'm pretty good at remembering/estimating dates of history I lived through. When I started reading grown-up magazines around 1967, there was a public service campaign to tell people that due to the recent rise in auto thefts, they shouldn't leave their car keys in the ignition any more. After that came reminders to lock your car. My father, a good neighbor, made it a practice when walking down the street that if he saw a parked car with its lights on, he'd open it and turn off the lights. But over time in the first half of the 1970s, he wasn't able to do that anymore because the vast majority of cars came to be locked. I think the last time he was able to turn off lights was around 1972-1974.
So, yes, there was less car crime before the Late Sixties. On the other hand, I suspect breaking car windows peaked around, say, the 1980s. But car theft has come roaring back in this decade. So, it's all very complicated. Unless you follow crime statistics closely like I do, these various trends will be a blur, so you will just pick one simplification to answer the survey question.
When I learned as a young man how credit cards work, I was shocked. I am still shocked until today. You just have a number (or two), and everyone who knows this number can pay from your account. And you give this number freely away at 100s of restaurants and other places (nowadays to online shops), to complete strangers.
I find this pretty much as ludicrous as the newspaper stand. Actually, more ludicrous.
Related to credit cards but not terribly related the the discussion on morality ...
In the US, very early on as credit cards were being rolled out, Congress passed a law that effectively made the merchants and banks absorb most or all of credit card fraud, assuming the cardholder actually looked at their bill and informed the credit card company of the fraudulent charges (and this was pre-WWW, so you had to wait for your paper bill to arrive each month).
The credit card companies complained A LOT about this, but it turned out that this removed a lot of customer hesitation about owning a credit card. Which led to huge growth in the credit card industry. Which eventually led to huge profits for the banks even after the fraud losses [things are more competitive now, but there was a time when banks put their promising young executives in the credit card divisions to fast track their careers ...].
And it mostly worked fine for decades (though, obviously, not without some fraud).
Over time and as things get more automated and anonymous the credit card folks are having to up their security game. But it did work as you described for a long time, partially because of where the loss was placed.
That's interesting, thanks!
>The credit card companies complained A LOT about this, but it turned out that this removed a lot of customer hesitation about owning a credit card. Which led to huge growth in the credit card industry. Which eventually led to huge profits for the banks even after the fraud losses
This sort of thing is why I'm skeptical of anti-regulationism. Here you have the ideal situation in the prisoner's dilemma, a reliable third party enforcing "cooperate", and short-sighted profit-seeking entities still overwhelmingly try to defect. They choose short-term profit over actions that will actually make their product more valuable to the customer (and more profitable to them).
Randian objectivism doesn't work, selfishness is always going to fall to Moloch.
Hmm, it's not really an ideal prisoner's dilemma because the credit card companies had no idea that taking on all the fraud losses would be profitable in the long run. An ideal prisoner's dilemma has certainty that picking the "cooperate" option will lead to both parties benefiting.
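For what it's worth, the "defect dominates, yet mutual cooperation pays more" structure being argued about can be sketched as a toy payoff table (the numbers are purely illustrative, not anything from the thread):

```python
# Toy one-shot prisoner's dilemma with illustrative payoffs.
# Each entry maps (row player's move, column player's move) to
# (row player's payoff, column player's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move):
    """Row player's payoff-maximizing move against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Defection is dominant: it is the best response to either opponent move...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection (1, 1) leaves both players worse off than mutual
# cooperation (3, 3) -- which is why an outside enforcer of "cooperate"
# (a regulator, in the analogy above) can leave everyone better off.
```

The uncertainty point stands, though: in the real credit card case nobody knew the (3, 3) cell even existed until the law forced them into it.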
You'd have to find non-defectbot modern politicians, though. Who had their own people's best interests in mind. In the West.
In one of Bruce Schneier's books he talks about how the UK didn't get that law, and the credit card companies mostly put the liability on customers by arguing that if someone stole your credit card number that must mean you were careless, because how else could anyone possibly get it.
And as a result the UK had far more fraud, because customers had much less ability to prevent fraud than the credit card companies did, and the companies didn't try very hard when they weren't the ones facing liability.
I find they usually do 2FA nowadays. They'd have to tap my SMS messages.
That usually costs about $10, though it's cheaper in quantity if you get a plan. Ars Technica had an article on this.
To be fair it does get stolen pretty regularly. I bet I average one unauthorized charge and card replacement a year (I travel a lot for work).
"Oh yeah someone at that restaurant was stealing cards" is something I have heard more than once from my bank.
But yeah it is crazy how well the system works.
Even more ludicrous is how checks work. Your supposedly secret bank account number, which you should never tell anyone, because they could withdraw money from your account, is written on the front of every check.
You have to verify your identity to accept credit payments.
In Canada, although less common than it used to be (and I don't have numbers on this), many people still:
1) Don't regularly lock their doors unless they're going on holiday or something
2) Leave money on the table at a restaurant once they've received the bill, and just leave.
I now live in the UK with my wife (who is British), and she thought both of those were crazy.
It's probably worth noting that I was growing up during a historical *peak* of crime, and those behaviours were extra common then.
Safe to say it's a bit different in the UK (although I do still see honesty boxes for farm veg out in the countryside).
1 is pretty common in the right kind of small town in the USA, even today.
2 is pretty standard behavior everywhere I’ve ever been.
I've never seen 2 outside of the US or Canada - have you seen it elsewhere?
I can for sure say I’ve seen people leave money on a table or bar in France, various states in the Caribbean, and places in Central America. Can’t recall for some other places in Europe or SE Asia, but I’d be really surprised if it was unheard of.
1. Depends very much on where you are, and also where you grew up. I grew up in Toronto and I always lock the door. A friend grew up in a town in New Brunswick and he never locked his door even when he moved to Toronto. I had a hard time believing he never got robbed.
2. When we were still using cash (before the late unpleasantness) I would do this, but I did make sure the waiter knew we were leaving - making eye contact etc.
I grew up in New Brunswick!
If 99% of people lock their doors, it probably doesn't pay to be a burglar who goes around checking to see if doors are unlocked.
Similarly, if a lot of homeowners are armed, being a home invader probably isn't a good career path. So, homeowners who don't own guns benefit from other people who do.
I've lived all over the US and neither of those strikes me as crazy. Not locking doors is more of a personal affectation (I just think the risk is worth the convenience; I don't imagine many thieves walking around just testing doors), but everyone leaves cash tips unattended, for example.
Notably, (2) is no risk to the person doing it - theft hurts the restaurant, not them.
This is still the case in some places in Austria and Switzerland
Many restaurants these days leave food or drink sitting out on a shelf where anybody can walk in off the street and access it. Amazingly, they expect that only people who have already sent them money will take food, and that those people will voluntarily limit what they take in relation to what they paid!
Even more amazing is that this actually seems to work; at least where I live (a dense urban environment) I never hear complaints about theft, and pretty much all the restaurants that take online pick-up orders do this. Presumably some would stop if theft was a problem. Other types of apparently riskier theft do happen (e.g. pick-pocketing).
Modern newspaper vending boxes aren't much more secure. Sure you are forced to put money in to open it, but once you do you have access to all the copies of the paper and are trusted to take only one.
I think the crazy thing about the honesty tins is more that someone could raid the tin than that someone could take the papers without paying - that does depend on the design of the tin though
In this case there is very little motivation for anyone to take more than one. What are you going to do with extra copies - sell them at a discount?
Something I've seen homeless people do from time to time. Or just give them away in the hope of a "donation" in return. I've also seen a local theater company place advertisements for their upcoming show in each copy of the newspaper instead of paying for advertising space. That was kind of weird.
Other times I'll see people jimmy it open or take all the papers out and leave them on top so future people don't have to pay. Seen people take a stack of papers home with them to wrap glass for moving.
None of these are very common, and clearly none of these is enough of a negative impact to be worth improving security over.
I recently read an anecdote about a guy selling sports cards on eBay: while most buyers paid, a few claimed they "didn't receive their purchase" and he was forced to refund them.
Point is, it worked out for him overall. Not the same as an "honesty pot", but a modern parallel.
I've known honest eBay sellers (antiques, vintage and specialty electronic equipment) who had to get out of the game entirely due to buyer fraud: "arrived broken", "not as described", free return of substitute goods, etc. As a response to fraudulent sellers, and to promote buyer confidence, eBay now sides so firmly with the buyer that sellers are left in a very vulnerable position.
As I recall, some big companies have less generous return policies than they used to because of buyer fraud.
And non-buyer fraud-- people getting LL Bean products out of the trash at college dorms and sending them back for refunds.
I can see how high-value, general-interest items would quickly become unprofitable to sell on eBay. Judging by in-group mentality and my one anecdote, it seems like specific collector items and/or items of niche interest would fare better.
[Philosophical tangent to follow]
It seems like the human tendency to group together and create urban areas that offer more opportunities (economic, dating, creative, etc.) also tends to break up communities, which by definition are people who know each other and are interdependent socially as well as economically. This leads to moral decline (steering back on topic) and collapse, in a cycle.
https://drive.google.com/file/d/1btPKl8ynTBr32c17VfKt2mcgD_I97nUg/view
This still happens today. I went camping in the US literally this weekend, and there was no attendant at the entrance. To get a campsite I just filled out a paper form that was kept in a zip lock bag on a table, and put my money in an envelope.
Not really related, but my father mentioned that his least favorite part of his newspaper route (40s-into-50s, thrown from a bike; eventually when he had a pretty big route, he went to a bank and got a $300 loan for a car, at age fourteen ;-)) was having to go by and ask people for their money. He even on his own dime bought stamps and addressed envelopes hoping they would just mail it to him. (He had to pay for the papers, you see.) One of those lessons only a job teaches perhaps. Don't be owing money - pay what you owe quickly and don't make people ask you twice or 3x. Certainly he is this way. I wouldn't call him a moral man though.
(a) In London right after both WWI and WWII they had huge labor shortages because so many men had not come back from the war. So during those few-year periods they had no choice but to try all sorts of no-labor-needed arrangements as temporary expedients.
(b) Where I live, in a large U.S. city, that sort of practice (if not the exact example cause there are no newspaper stands anymore) is commonplace. E.g. I paid for my lunch just now by leaving a couple of bills on the lunchroom counter and walking out. Etc.
Does anyone know if this has been tested recently? My intuition is that this probably depends a lot on where you live, but in most places in most parts of the world, you could still have a vending stand like that (maybe not newspaper though, since nobody buys newspapers anymore). But this is an easily testable claim.
Singapore especially!
I’ve been told by friends who live in Singapore that it is common practice to claim your table at a food court by leaving your phone on the table while you’re at the stalls ordering food.
I am reminded of this from Mark Twain: "When I was a boy of fourteen, my father was so ignorant I could hardly stand to have the old man around. But when I got to be twenty-one, I was astonished at how much he had learned in seven years."
Our values & ideas always go from indistinct and adaptable when we are young to concrete and rigid as we grow. Does not matter whether it is moral values or technological concepts or intelligence etc., I find we flip from being accepting of other views when we are young to judging other views against our own as we grow older.
There were times I used to wonder, "Is it me or the world around me that is changing?" But then I realised it is both: "I" and "the world" around me both mutate. It is just that I mutated with the world when I was young and hence did not see the difference, but as I grew older I fell out of sync with the mutation, which caused me to see the change and judge it.
So, irrespective of what the actual change is, we always think that things in our childhood were much better and have degraded over time.
The other effect is that when you are young, and hear about <this wonderful new thing> you are hopeful and credulous enough to believe that this really is the way of the future, and well worth the investment of your time in learning. Having been around a while, you discover that a great many wonderful new things are merely the latest fashion. In a decade they will be forgotten -- or it will be clear that this is more than a fad, and you can start your learning then.
Definitely I think as you get older, you look back at childhood and early youth with rose-tinted glasses. People were friendlier, the world was nicer, things were just better.
Which is why *of course* asking people "do you think morals/the taste of tomatoes have declined since you were young?" is so subjective and is going to get "between when I was born and now, things have definitely gone downhill".
Because the world *does* change, and we get a lot of experiences bad as well as good, and things seemed to be simpler when we were kids because we had much less to worry about in a limited sense ('is Pete my friend, does he like me or have it in for me?' doesn't have as much heft as 'Pete is my boss, does he like me or have it in for me?' when it comes to messing up your life).
I think that, in general, people are also nicer to children. Most people have much higher moral standards for interacting with children, so it might be that people really are more moral (to us) when we're young and have the protection of youth.
Good point!
Bullshit, people aren't nicer to children. In fact, adults treat children much, much worse than they do each other. If an adult tried to forcibly imprison someone else for years (even outside of school, they forced activities and completely controlled the schedule; I wished I could have played soccer with the other kids, but nobody was allowed to leave their own backyard except for a very rare playdate), then violent self-defense would be considered justified. The adult legal system doesn't tell victims of assault that they just need to interact more with their abuser and force it to happen more often. And a lot more stuff like this.
On top of it all, the US legal system forcibly returns children who run away from home unless they experienced a short list of specific abuses. I'm not sure what the situation is like in other countries, but adults get nothing like this.
Reminds me of a passage from G. K. Chesterton:
"When the business man rebukes the idealism of his office-boy, it is commonly in some such speech as this: "Ah, yes, when one is young, one has these ideals in the abstract and these castles in the air; but in middle age they all break up like clouds, and one comes down to a belief in practical politics, to using the machinery one has and getting on with the world as it is." Thus, at least, venerable and philanthropic old men now in their honoured graves used to talk to me when I was a boy. But since then I have grown up and have discovered that these philanthropic old men were telling lies. What has really happened is exactly the opposite of what they said would happen. They said that I should lose my ideals and begin to believe in the methods of practical politicians. Now, I have not lost my ideals in the least; my faith in fundamentals is exactly what it always was. What I have lost is my old childlike faith in practical politics. I am still as much concerned as ever about the Battle of Armageddon; but I am not so much concerned about the General Election. As a babe I leapt up on my mother's knee at the mere mention of it. No; the vision is always solid and reliable. The vision is always a fact. It is the reality that is often a fraud. As much as I ever did, more than I ever did, I believe in Liberalism. But there was a rosy time of innocence when I believed in Liberals."
This reminds me of the saying, "Women marry men expecting them to change. Men marry women expecting them not to." In reality, both are right, and both are wrong, since in some ways they change and in others they stay the same. Perception is sometimes a strange thing.
THAT is funny-true!
Nice post. By the way, violent crime reported is more than double since 1960, but actual murders are up only 20%. That difference may be revealing.
Measures of "violent crime" can pick up changes in whether people feel it's worthwhile to report crime to police, in which case more reports can sometimes match to *less* actual crime.
For this reason, historians prefer to track homicides over time, as murder is almost always reported.
When I see murders up 1.2x, and a 2.5x increase of "violent crimes," one immediate hypothesis is "actually, there aren't 2.5x more violent crimes, people these days just report more of the crime that happens to police."
An exact opposite hypothesis would be "the increase in violent crime is real, and there would be a lot more murders, too, except that modern medicine keeps the assault victims alive and it's not murder if you don't die."
A third sideways hypothesis would be "there aren't even more attempted murders, just more handguns to attack with instead of knives, so the same level of violence as in 1960 is getting 20% more people actually killed."
Which is right? I don't know. But I do know the chart can't tell you!
So "beware the man of one chart" just like you would "beware the man of one study."
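To make the "can't tell from the chart" point concrete, here's a back-of-the-envelope sketch. The 1.2x and 2.5x figures come from the comments above; the "murders track true violence" assumption is just hypothesis 1, not established fact:

```python
# Back-of-the-envelope: what reporting-rate change would reconcile a 2.5x
# rise in *reported* violent crime with only a 1.2x rise in murders,
# IF murders are fully reported and proportional to true violence?

murder_growth = 1.2            # murders, 1960 -> now
reported_violent_growth = 2.5  # reported violent crime, 1960 -> now

# Under hypothesis 1, true violence grew only 1.2x, so the rest of the
# reported increase must come from people reporting more often.
implied_reporting_growth = reported_violent_growth / murder_growth
print(f"implied reporting-rate growth: {implied_reporting_growth:.2f}x")
# -> roughly 2.08x: the same chart fits "people report about twice as
#    often now" just as well as "violence really rose 2.5x".
```

The point of the exercise: the two numbers alone can't distinguish the hypotheses; you'd need independent evidence about reporting rates (e.g. victimization surveys) to pin down which factor moved.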
Confounder: it's way harder to do anything with a stolen car today. Parts way easier to track, etc. I'll bet if you steal a Tesla, you will find it extremely difficult to get even a tiny sum for it.
Criminals (for the most part) don't steal cars anymore, they just break windows and take whatever valuables are there that can be taken.
Car theft in most American cities has skyrocketed in the past few years. It's doubled in the past year in the city where I live (Memphis). It's mostly Kias and Hyundais being targeted, because they didn't have immobilizers prior to 2021, and there were social media instructional videos on how to steal them.
My sense (in my area) is that in most of those thefts the car gets joy-ridden, partied in, then used for a robbery getaway and abandoned. I don't *think* there's a big market for hot Kia Souls or parts.
A constant lament on social media locally is people's tools being stolen out of their trucks, even when they have a cover or box.
Down here they steal catalytic converters a lot - there's a place on the interstate with a big sign "We Buy Catalytic Converters!" which almost seems like it might not be legit. And then sometimes people come out their houses in the morning and find their truck up on blocks, the wheels gone.
A lot of times the stolen cars in this area seem to be perhaps something like a gang initiation? For which I suppose we're supposed to be grateful that it was so mild. The vehicle will be found in the immigrant area that is a sort of Bermuda Triangle as far as solving crime goes, evidence of some sort of party like Flamin' Hot cheetos all over the carpet. The cops do their job if they get the vehicle returned to you. There is maybe not much effort to figure out who stole the car, when it's not part of an actual theft ring.
But car thefts are way up in this decade.
A general problem about survey questions about "the past" is that there are a lot of different alternatives for what is "the past." For example, homicides, auto thefts, and traffic fatalities in the 2020s are way up over the 2010s. On the other hand, lately Biden supporters have taken to arguing that early 2023 appears to have been less chaotic than 2021, so what are you worrying about?
Similarly, combat deaths in Eastern Europe are way down compared to the Battle of Stalingrad.
Murder is usually considered to have increased less than expected because medical treatment has improved and many would-be murders are now just attempted murders.
Not because of the three strikes stuff?
Also, intra-prison crime apparently doesn't count.
I agree that prisoner-on-prisoner crime isn't getting counted, and neither is guard on prisoner. I'm guessing that if a prisoner attacks a guard, that *might* be counted.
Nice catch: I missed that sub-category altogether. Thanks!
So, while this is obviously true, I've now seen this justification used enough to raise the question: Aren't attempted murders *also* pretty consistently reported? Shouldn't we be able to correct for that effect? Has anyone attempted to do so?
I'm not sure. My guess would be that if X assaults Y, it's hard to tell if it was an "attempted murder" or just a regular assault, so the natural category is overall violent crime statistics.
Perhaps there's something interesting to do with trying to establish base rates of deaths from assaults across different contexts where homicide isn't a plausible motive in some contexts.
What about percentages? If the population doubles and absolute quantity of murders only goes up 80%, isn't that an improvement?
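Working the arithmetic out (the numbers here are hypothetical, matching the comment's scenario):

```python
# Per-capita check: population doubles, absolute murders rise 80%.
pop_then, pop_now = 1.0, 2.0        # arbitrary population units
murders_then = 100.0
murders_now = murders_then * 1.8    # +80%

rate_then = murders_then / pop_then
rate_now = murders_now / pop_now
print(rate_now / rate_then)  # -> 0.9: the per-capita rate fell 10%
```

So yes: even though there are more murders in absolute terms, each individual's risk of being murdered dropped, which is the usual sense in which criminologists compare eras.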
Improvement in ERs, belike.
"...murders only goes up 80%, isn't that an improvement?"
An improvement in societal morality OR policing(or other environmental variable)?
Why be exclusive?
But who can dispute that fewer murders per capita is better than a stable murder per capita rate?
An improvement in society that comes from a population's broad acceptance that ending another's life for one's material or emotional gain is morally wrong is more dependable than one that is merely the result of environmental variables, as it generates a lasting 'vibe.' If, for instance, stricter policing is the cause, this can change with a new mayor in a matter of weeks.
Forgive me if I'm misunderstanding, but are you saying fewer murders per capita is worse in the opinion of people who think having lots of people is making the world worse?
Sorry. To clarify:
I'm simply saying that an improvement in the murder rate is better when it results from some sustained moral improvement, such as examples of leadership that inspire personal change(1) in a population, than from an unchosen/forced external factor such as stricter rules/policing, which improves behavior only while in effect.
Apologies if I did not address your original point.
(1)"The history of Europe during the later Middle Ages and Renaissance is largely a history of the social confusions that arise when large numbers of those who should be seers abandon spiritual authority in favour of money and political power. And contemporary history is the hideous record of what happens when political bosses, businessmen or class-conscious proletarians assume the Brahman's function of formulating a philosophy of life; when usurers dictate policy and debate the issues of war and peace; and when the warrior's caste duty is imposed on all and sundry, regardless of psycho-physical make-up and vocation."
--Aldous Huxley, The Perennial Philosophy, 1945
Scott already has the modern medicine issue as item #1 in his Mistakes post.
(Murders are otherwise a better indication than violent crime because murder is hard to underreport.)
This is why I much prefer the National Victimization Survey for assessing actual changes in U.S. crime rates.
Yikes the discussion section in the nature paper is chilling. What I do see is editorial decline in our science journals.
I'm too eager to get this comment out to really check thoroughly, first... (edit: well that bit me — dunno how many damn words were autocorrected to the wrong goddamn form, initially...)
...but it looks to me like you're right about the statistical methods used. That's not how I'd do it, I think.
The "Bayesian" method they use is interesting, but the documentation on it (that is, on the "Bayesian ROPE" method using the Highest [Probability] Density Interval™) takes ESPECIAL CARE! to caution the would-be user that defining the ROPE ("region of practical equivalence", the range wherein the parameter-of-interest's values are "negligible" / "of negligible magnitude") offers an unfortunate latitude to the researcher.
That is: both the range itself *and* the choice of units are left to the user's judgment, with no clear Best And Objective Way to serve as a bright-line distinction, and the choice of units can determine whether the *same exact data* counts as confirming the null or not.
Man, this is just *made* for Garden-of-Forking-Paths–ing (GoFPing?) until you've got a satisfying conclusion!
That's in theory, anyway. In this particular case, maybe they actually did pick common-sensical values for the "negligible effect" range (ROPE).
Give them enough RoPE, and they'll hang themselves?
Agreed in the sense that Scott's quote of "ROPE" leaves out the most important part of it -- what was the researchers' definition for region of practical equivalence? Ctrl-F:ing the article, they define it in the analysis section, as "±0.1 standard deviations". That is, x % HDI within ROPE means that x% of the highest density interval of the Bayesian posterior estimate was within 0.1 standard deviations of ... ahem, I am not actually sure of what? I wish authors would explain what they did in detail, at least in supplements or something.
Also, while we are talking about statistics: for question 78 in Table S3, it is not the r^2 that is -0.006 but b (the unstandardized regression coefficient, probably?). r^2 is 0.008.
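For concreteness, here's how I read the "% HDI within ROPE" computation, sketched in Python. The ±0.1 SD ROPE matches the paper's stated definition; the posterior samples and the `hdi` helper are my own illustrative stand-ins, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
# Fake posterior for a standardized slope, centered near zero
posterior = rng.normal(loc=0.02, scale=0.03, size=100_000)

def hdi(samples, mass=0.95):
    """Narrowest interval containing `mass` of the samples."""
    s = np.sort(samples)
    n_in = int(np.ceil(mass * len(s)))
    # Width of every candidate interval containing n_in points
    widths = s[n_in - 1:] - s[: len(s) - n_in + 1]
    i = int(np.argmin(widths))
    return float(s[i]), float(s[i + n_in - 1])

lo, hi = hdi(posterior)
rope_lo, rope_hi = -0.1, 0.1  # the paper's ±0.1 standard deviations

# Fraction of the HDI's width lying inside the ROPE
overlap = max(0.0, min(hi, rope_hi) - max(lo, rope_lo))
fraction_in_rope = overlap / (hi - lo)
print(fraction_in_rope)  # 1.0 here: the whole HDI is "practically zero"
```

Note how the conclusion hinges entirely on the ROPE bounds: shrink the ROPE to ±0.01 and the very same posterior no longer counts as "no change", which is exactly the researcher latitude being complained about above.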
I want to agree, but I really don’t think amoral rationality has moved the needle enough to justify my bias.
>(think I’m strawmanning? Read the last paragraph of the Discussion section)
My god, you're not kidding. Quoting it for anyone else:
>>The United States faces many well-documented problems, from climate change and terrorism to racial injustice and economic inequality—and yet, most US Americans believe their government should devote scarce resources to reversing an imaginary trend.
...Also, I'm surprised you didn't link this back to "Social Psychology is a Flamethrower"; this is clearly some choice napalm.
Wow, I was only a little surprised to hear that the Discussion was like that, I'm a fair bit more surprised to see that it was totally like that without the slightest reservation
Yeah my respect for Nature certainly took a big hit. Too much time is spent telling each other how terrible the other side is. And now it has spread to science journals. (Scientific American, followed a similar path. But ~30 years ago.)
Coming right on the back of their editorial blasting people for caring about AI x risk while making no attempt to refute AI x risk. It's ideologically captured by the Successor Ideology.
Yeah this has made me lose a last little bit of respect for Nature that I didn't know I still had.
That's not even the worst sentence, imo. Also from the discussion section: "If low morality is a cause for concern, then declining morality may be a veritable call to arms, and leaders who promise to halt that illusory slide—to “make America great again”, as it were—may have outsized appeal." Completely broadcasting their disdain for Trump. I don't like him either, but I wouldn't mention that in a scientific paper. And people wonder why Trumpers don't trust scientists.
Huh, didn't notice that. I mean, it's reasonably-accurate as a description of Trumpists (I think most of them would agree in principle that declining morality *is* part of their motivation to vote for Trump, it's just that they don't think it's "illusory") but yeah, pretty explicit.
It's abundantly clear to those who are not liberal elites that their opinions and beliefs are not valued by those that are. It goes back long before Hillary Clinton called them a "basket of deplorables" or Obama disparaged how "They get bitter, they cling to guns or religion or antipathy to people who aren't like them or anti-immigrant sentiment or anti-trade sentiment as a way to explain their frustrations."
Um....what? Scott claims that this paragraph says:
1. conservatives are wrong
2. conservatives' fears are fake
3. we should refocus conservatives fears onto problems liberals care about
But the paragraph
1/2. Does not single out conservatives as affected by this bias, let alone say "conservatives are wrong" or "conservatives' fears are fake" in general
3. Does not exclusively mention left-coded problems, as terrorism is traditionally right-coded. I guess if you interpret it as referring to the increase in far-right terrorism, then okay, I see what you mean, but I wouldn't be a fan of terrorism even if "my tribe" were doing it. In fact I'm pretty sure I would be bothered by "my tribe" killing random people. And it doesn't feel that long ago that I was hearing about the evils of ISIS all the time (especially from the right). And I agreed, ISIS were nasty bastards that we should Do Something About, though by today's standards it seems odd that it was right-coded. This article is from 2018, does it not overlap that period?
And yeah it also says '“make America great again”, as it were' but is that not the kind of response you would expect from people who perceive moral decay? Like, if someone has a bug up their ass about "the moral decay of America" and therefore wants to “make America great again”, does it not make sense to regard this as a mistake if in fact there is no moral decay?
Maybe Scott's criticism would've rung truer if he put it at the bottom after his main criticisms, rather than at the top where I'd fact-check before reading on.
>1/2. Does not single out conservatives as affected by this bias, let alone say "conservatives are wrong" or "conservatives' fears are fake" in general
Worries about moral decline are almost the defining conservative issue. Certainly, I'd say it's the biggest conservative issue these days.
>3. Does not exclusively mention left-coded problems, as terrorism is traditionally right-coded.
I noticed that too, but...
>This article is from 2018, does it not overlap that period?
>>About this article
>>Received 11 July 2022
>>Accepted 26 April 2023
>>Published 07 June 2023
...so it is indeed post-Jan-6. I'd say that one's close to neutral, and I'm not even sure which way it's leaning.
>Like, if someone has a bug up their ass about "the moral decay of America" and therefore wants to “make America great again”, does it not make sense to regard this as a mistake if in fact there is no moral decay?
Like I said above, I indeed think that logic is sound. However, I'm suspicious (and certainly the Trumpists would be suspicious) that this Nature article is not in fact following that logic but exploiting it, starting from the conclusion that people shouldn't vote MAGA and manufacturing evidence that moral decay is fake in order to convince people not to.
(I'm not quite making the accusation of the experimenters lying, more a case of "experimenter effect" plus publication bias on Nature's part.)
Oof, thanks for the correction. I don't know why I thought it was from 2018.
If you think of morality as a magnitude (which I think is the right way of thinking about it), then any move away from what you think is right is a decline. You can't super not-cheat-on-your-wife. So it's a bit of a silly question. If morality hasn't declined, then it hasn't changed. But it clearly has changed.
I don't think this is true as to society as a whole, which could shift from "lots of people cheat on their spouses" to "few people cheat on their spouses."
To biological creatures, evolutionary pressures are God. Therefore, since evolutionary pressures are inconsistent across place and time, God is always a creature of the moment, enforcing his will most brutally on yesterday's favorite children. The fundamental act of morality is patricide, our parents, demons.
Evolutionary pressures certainly exist, but they aren't God. Evolution functions more in a format of "spray everything at the wall and see what sticks" than "this has a chance to improve survival so let's try it out". Survival of the fittest only works sometimes.
Like a lot of my comments, this one was sort of meta-satirical, which may not even be a thing, so apologies for any confusion. Unpacking the layers: I am basically just taking a common authoritarian "red in tooth and claw" social Darwinism and making a slight change in reasoning to say we should use that reasoning to kill such people themselves. It's a weird joke that's packed in pretty tightly and that nobody ever understands, so I dunno why I keep making it.
I find it quite funny but you may want to work on subtly signaling that it’s humor because otherwise you just come across as a garden-variety internet maniac.
Insofar as the direction of travel for morality is driven by young people, an aging society would feel increasingly alienated from the moral trajectory simply on account of ratios of young to old.
This is a great point. I have thought of this before but never quite crystalized it fully. The slow aging of the population is going to create more discomfort with the "normal" rate of change. And I think many suspect the rate of change is above normal right now.
Expanding on what you briefly mentioned about wealth, I think you could divide morality into roughly two categories:
A) "be a good Roman" morality, meaning acting based on feelings of duty/honor to do hard/badass/valuable things that benefit others/make you look cool. This is more Nietzschean.
B) "don't be a Nazi" morality, meaning being careful not to accidentally act on evil morals and serve some supervillain dutifully in a way you think is righteous but actually is totally not. This is more like Socrates or something.
Morals in society during times of peace/ prosperity will generally be drifting in the "don't be a Nazi" direction for morality. Judge not that ye be not judged, stop to think before marching off to war, etc. I'd argue we're a lot better at "don't be a Nazi" morality in 2023 than we were in 1949, in spite of all the complaints about liberals being speech Nazis/ alt right people being literal Nazis. People in general are probably much more open minded, accepting, not racist, not sexist, empathic, and humble with regards to morality than they were then.
But "don't be a Nazi" morality is negatively correlated with "be a good Roman" morality. So since 1949 we've also gotten significantly less good at what you call "1940s morality." And while 1940s morality and Livy's model for morality are much different, they both have in common that they are calling people to behave according to specific norms, calling on people to live up to a specific model for good citizenship that involves a lot more than just "being chill." During times of war or economic hardship, "be a good Roman" morality probably improves, while "don't be a Nazi" morality probably erodes.
So this seemingly constant attitude that morality is in decline could on some level simply reflect this conflict between two opposing things that we happen to call "morality." Most of the time, civilizations slowly get more prosperous, leading to things shifting in a "don't be a Nazi" direction. But we focus on the negative and see how everyone is being less good Romans: more directionless, apathetic, etc. Then a war breaks out and patriotic fervor explodes out of nowhere, labor force participation rates soar, and the martial virtues that Livy loves so much increase. But as this happens, we can't help but notice that humans are murdering each other like animals, and all kinds of other atrocities are occurring. Clearly morality must be in decline, it's the end of the world. So perhaps the only time societies in general feel like morality is improving is right after a victorious war, when the soldiers come home and stop killing people, taking their disciplined habits and conformist haircuts with them. At these moments both type A morality and type B morality can seemingly be at new heights at the same time. But most of the time it will seem like the world is getting either more Nazi or more apathetic.
But the 'be a good Roman' morality isn't Nietzschean--they were acting on behalf of the Roman state.
I think you've got a pretty good description of the USA in the past 80 years as you have a transition from God, Family, and Country to Never Hurt Marginalized People, but lots of other times and places returning soldiers did all kinds of awful things. Were demobilized war veterans in Weimar Germany a moral force?
I'm sure they were, by their own standards; in fact, aren't they close to the archetype of 'Roman' morality?
Thanks for your thoughts. I meant Nietzschean in the broader sense of valuing absolute standards rather than relative things like mercy, kindness, sympathy, etc. But yeah, Nietzsche was pretty individualistic/pretty anti-state (at least in Thus Spake Zarathustra, where he basically called states false gods and parasites), so describing this "be a good Roman" idea as Nietzschean is perhaps misleading.
Yeah, returning soldiers often do awful things. I wasn't even arguing that the triumphant return of soldiers means both kinds of morality are actually improving (like in the obvious example of 1950s American culture), but that this triumphant return can create a general feeling of moral progress that might offset the more common attitude of moral decay. There were obviously some problems with 1950s American morality.
I think Rome could be imagined as a Nietzschean entity rather than individual Romans.
The people who actually went to war against the Nazis might have their own ideas about whether they were good at not being Nazis.
Fighting against Nazis doesn't implicitly make you good at "not being a Nazi." It is easy to imagine two groups of literal Nazis with very similar values still fighting each other for dominance in accordance with their Social Darwinist value system. Anyways, I was meaning to open up a discussion contrasting a morality that focuses on martial virtues/ conformity versus a morality that focuses on cautious thoughtfulness/ mercy. I had no intention of focusing on the real Nazis and the real people who defeated them.
"Be a good Roman" and "Don't be a Nazi" are good fast ways of describing moralities.
A lot of the difference is about who you are supposed to be loyal to.
Thanks!!
"Maximus: Do you find it difficult to do your duty?
Cicero: Sometimes I do what I want to do. The rest of the time, I do what I have to.”
--Gladiator (film), 2000
There is still, I think, a moral component to our choosing, on duty and off.
If you saw the movie, Fury, the scene at the end where the young Nazi was inspecting the tank, for instance.
"Don't be a Nazi" is ambiguous-- it's usually taken (in the west, not in Russia) to mean "don't even get near committing a holocaust", but Nazi expansionism killed a lot more people.
Being a good Roman can overlap being like a Nazi.
Except "holocaust" these days encompasses refusal to accede to pronoun demands. Nazi has long become a generic insult for a political enemy, whether in Russia or in the West.
Do you have cites? I haven't seen the meaning of holocaust drift that much.
"Transgender genocide is a term used by some scholars and activists to describe an elevated level of systematic discrimination and violence against transgender people."
https://en.wikipedia.org/wiki/Transgender_genocide
Holocaust and genocide are different words.
I also don't think most people have expanded genocide that much.
There might be another-- "Be a mensch"-- be honest and reliable and generous, though not so generous as to be self-destructive.
I don't think Socrates is actually a good representative of that morality. He fought in war rather than worrying about whether it was morally acceptable to do so, and part of why he was so controversial is that some of his students had been involved in a tyrannical takeover of Athens. It's worth reading Willmoore Kendall on how moderns use Socrates as a symbol without understanding what he actually believed and why he drank the hemlock.
https://www.unz.com/print/ModernAge-1959q1-00098/
continued at https://www.unz.com/print/ModernAge-1959q1-00109/Contents/
Sure, I'm not claiming to be any kind of expert on Socrates. I guess I mean "Socrates as represented by Plato." Sure, he fought in a war when he was younger, but that wasn't unusual for Greeks of his day. The simple statement "I am wise because I know that I know nothing" is very much representative of this "don't be a Nazi" moral idea: thoughtful, introspective, etc. Plato's Cave (perhaps better called "Socrates' cave") is another obvious example, the focus being on enlightenment and understanding, not obedience. Buddhism, Jesus' teachings in the New Testament, and many other ideological systems favor this thoughtful, fuzzy, critical, enlightenment-focused approach to morality over more traditional "be a good Roman" kinds of ethical systems. And whether or not Socrates 100% fits this "don't be a Nazi" morality isn't the focus of my argument; I'm just trying to highlight a dichotomy that seems to exist. Thanks for the interesting link, cheers-
I don't think Socrates fits Buddha or Jesus either. Although Jesus was an annoying gadfly in his own time.
Plato's Cave was invented by Plato, not Socrates. And Plato's idealized Republic is totalitarian!
This is a tangent, but The Republic is an introspective work more so than a political treatise. This is stated explicitly no fewer than three times and referenced in more minor ways throughout, and I feel like I'm taking crazy pills whenever people discuss it.
I suspect quite a few of the differences reflect changes in the distribution: most people have become far less violent (you are far less likely to be attacked by your spouse, parent, boss, cops, teacher, or customers in your local pub), but 15-30 year old males linked to the drug trade have become much more violent, and that explains all of the increase in violence.
Not my experience at all. I don't think random violence from family/friends/acquaintances was more (or less) common 50-100 years ago than now. Would be interested in seeing some source that you think contradicts this.
The other half of your assertion, that young criminals today are more violent than in the past does track, but also not sure if just my impression or real.
I'd be curious how spousal abuse was tracked or categorized back when it was considered more acceptable. I don't know if it's more or less common, but a man used to be able to force his wife to have sex with him and that was generally legal. Being legal, I doubt it was counted as violence in crime studies, but would definitely be on the mind of survey takers when talking about safety and violence.
There's been a lot of relatively recent pressure to not spank children.
I am not sure I follow all of the arguments in this post. Just to pick on one aspect: Do we really suspect that people have the same inclination to report violent crimes (in particular rape) today as compared to 50 years ago? Research seems to document that more rapes are being reported in recent years, compared to 50 years ago. Hence, any uncritical referral to general violent crime rates is just going to be...biased.
I think the definition of rape has been broadened in the past fifty years, too.
Given that people complaining about moral decline seems to happen in most time periods, should we think that moral progress/regress and moral drift have been constant or moving in tandem for that entire time? Or perhaps my presumption that complaints of moral decline are universal is wrong?
I'm not sure it's true that "people complaining about moral decline seems to happen in most time periods". Most examples people offer come from a few, widely-scattered time periods.
I think it goes in cycles; I think over the generations we improve on some things, decline on others, until the bad consequences pile up and then we go for Moral Rearmament or Great Revivals. And those needn't be religious, they can be the secular virtues being championed by the leaders of the day (whether that be Thatcher with "Victorian values", something I think she got very wrong, or current progressives and "we should pay reparations").
To quote "The Screwtape Letters" about fashions in morality, though this is from the point of view of a devil trying to divert attention away from the real problems of the time:
"The use of Fashions in thought is to distract the attention of men from their real dangers. We direct the fashionable outcry of each generation against those vices of which it is least in danger and fix its approval on the virtue nearest to that vice which we are trying to make endemic. The game is to have them running about with fire extinguishers whenever there is a flood, and all crowding to that side of the boat which is already nearly gunwale under. Thus we make it fashionable to expose the dangers of enthusiasm at the very moment when they are all really becoming worldly and lukewarm; a century later, when we are really making them all Byronic and drunk with emotion, the fashionable outcry is directed against the dangers of the mere "understanding". Cruel ages are put on their guard against Sentimentality, feckless and idle ones against Respectability, lecherous ones against Puritanism; and whenever all men are really hastening to be slaves or tyrants we make Liberalism the prime bogey."
That overlaps with the rationalist idea of being careful about the advice you find attractive.
Worth reading for the recent subscribers (and worth reading *again* for the dinosaurs).
https://slatestarcodex.com/2014/03/24/should-you-reverse-any-advice-you-hear/
"I think it goes in cycles"
YES - supported by my post containing: "Civilization Cycles Compared"
And to extend the thinking of your posted book quote with the same author:
"Every now and then they improve their condition a little and what we call a civilisation appears. But all civilisations [civilizations] pass away and, even while they remain, inflict peculiar sufferings of their own probably sufficient to outweigh what alleviations they may have brought to the normal pains of man. That our own civilisation has done so, no one will dispute; that it will pass away like all its predecessors is surely probable.”
--C.S. Lewis, The Problem of Pain, 1940
In summary, I'd say some things change (Whack-A-Mole) but overall there's a consistency (same game) as a society/civilization morally declines (runs out of quarters), then resets (gets change for a five dollar bill).
People don't complain about moral improvement. It's a survivorship bias of arguments.
Rome started out as a town of, basically, bandits. Its morals dramatically improved before declining.
This reminded me of Schwitzgebel's work (http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/BehEth-140123a.htm) aiming to show that ethicists were not more moral than others. But then you look at their measures and they are things like: "Ethics books were more likely to be missing than other philosophy books: 8.5% of ethics books that were off the shelf were missing or more than one year overdue, vs. 5.7% of non-ethics books, a risk ratio of about 1.5 to 1 (66/778 vs. 52/910, CI for difference +0.3% to +5.2% , Z = 2.2, p = .03)."
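Schwitzgebel's quoted figures are easy to sanity-check. A minimal sketch in Python, assuming the standard two-proportion z-test (pooled variance for the z statistic, unpooled for the confidence interval; these are my assumptions about his method, not stated in the quote), reproduces the risk ratio, CI, Z, and p he reports:

```python
from math import sqrt, erf

# Quoted figures: 66/778 ethics books missing vs 52/910 non-ethics books
miss_eth, n_eth = 66, 778
miss_oth, n_oth = 52, 910
p1, p2 = miss_eth / n_eth, miss_oth / n_oth

risk_ratio = p1 / p2  # ~1.5, as quoted

# z statistic from the pooled proportion
p_pool = (miss_eth + miss_oth) / (n_eth + n_oth)
se_pooled = sqrt(p_pool * (1 - p_pool) * (1 / n_eth + 1 / n_oth))
z = (p1 - p2) / se_pooled  # ~2.2

# Two-sided p-value via the normal CDF
p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # ~.03

# 95% CI for the difference uses the unpooled standard error
se_unpooled = sqrt(p1 * (1 - p1) / n_eth + p2 * (1 - p2) / n_oth)
ci = (p1 - p2 - 1.96 * se_unpooled,
      p1 - p2 + 1.96 * se_unpooled)  # ~(+0.3%, +5.2%)

print(round(risk_ratio, 2), round(z, 1), round(p_value, 2))
```

So the arithmetic in the quote checks out; the question is whether overdue library books are a meaningful proxy for morality at all.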
To be fair, asking people "how moral are you" is not going to work very well, and propensity to crime needs a very large sample size due to a very low base rate (especially when you control for class, since ethics professors are high-class). In 2014 there wasn't sufficient tech to straight-up mind-read people (and it's questionable whether even a full "download all memory" would achieve this, since a lot of dickery comes from *not caring* and thus isn't remembered). I suppose you could put bugs on people and examine their daily lives for a year, but then you'd get eaten by the IRB because the other people helped/harmed by those actions haven't consented to the research.
Trying to present quotes from Roman Empire-period historians as evidence that moral decline is all alarmism and illusions is particularly ironic, when Rome is a canonical example that a civilization can spoil, decay, and collapse. One can almost conclude that we're dealing with New Dark Age cheerleaders.
Good point.
*Late Antiquity
It's from Livy which means it was over two hundred years until the Roman Empire's peace and prosperity collapsed. And about another two hundred until it actually fell.
And yet, Caligula took power just a couple of decades from then. Evidence suggests that a nation can bear plenty of ruin indeed, but not an infinite amount.
Caligula was in power for four years and everyone agrees he was sane when he was put into power. His insanity came afterward, most sources say due to an illness. His reign was also not all that bad from the average person's point of view. He expanded the empire and built a bunch of infrastructure. But he upset the army and the aristocracy so they assassinated him.
A couple of decades from then Nero took power. I'm not sure what your overall point is. Do you claim that there wasn't widespread depravity and corruption in the Roman Empire, long before it eventually declined and collapsed? That would be an interesting contrarian perspective for which I'd appreciate a source.
You want a source to prove a negative?
My point is that if you want to argue "depravity and corruption led to collapse" then you need to show that there was an uptick in these things prior to collapse. It hardly matters if there was two centuries before the collapse. If anything it's a problem for your theory since it means that an uptick in depravity and corruption didn't lead to a collapse then.
And if you're proposing some model where each generation gets worse and worse and it accumulates you need to explain Vespasian or Marcus Aurelius.
>you need to show that there was an uptick in these things prior to collapse
Not necessarily. A civilization can stumble along at about the same level of depravity and corruption, until a serious crisis finally knocks it over.
>you need to explain Vespasian
Was met with "constant conspiracies" against him.
>or Marcus Aurelius
Ruled alongside Lucius Verus, a run-of-the-mill piece of shit.
That there were a few decent rulers now and again doesn't disprove overall squalor.
From where I sit, "morality" consists of a set of rules that Good People (TM) impose on everyone else. Women should be subordinate to men. Blacks should be subordinate to whites. Particular types of clothing should be worn, indicating one's status in the social hierarchy. Sexual activity must be done in the correct way, with the correct person. Everyone should participate in religious rituals. Etc. etc. somewhat ad nauseam.
I don't think my use of this term is especially unusual in my generation - just about everyone who felt they were should-on by the Religious Reich tended to adopt this usage. We remember the Moral Majority (TM), and what it claimed was "moral".
Perhaps some of my peers have mellowed with age. But if I'd been responding to any such survey, the results would be somewhat "through the looking glass".
This is also my context for these researchers' ideas of "objective morality". They've picked a set of rules - perhaps less pernicious than those purveyed by the Religious Reich (you do say they appear to be left wing) - and tried to create a metric based on those rules. Unfortunately, there isn't any uncontested set of rules that all will accept, even in a single generation - as you point out for the specifics of their "objective" metric.
Perhaps any attempt to measure people's perceptions of moral trends tends to measure conformity - to what extent does everyone in the society agree on the same shoulds and oughts?
Consider two types of violators of moral rules:
- the sinner violates rules that they agree with, or at least give lip service to. They conceal their sin if possible, and repent when caught.
- the outsider violates rules that they don't agree with. They are more likely to argue that your rules are insane than to conceal their activities.
I have no non-anecdotal evidence, but I strongly suspect that outsider behaviour counts *much* more than sinner behaviour in people's perceptions of what they see as immorality. And this is particularly true when the outsiders are right there in your face, claiming equal or higher status, rather than in some benighted foreign territory.
And that's what we have in the US - two or more separate solitudes that can't help but see each others' conflicting rules and behaviour.
When more than 50% of the country doesn't think my rules should be imposed, *whichever set I believe in*, it sure looks like moral decline, at least if I believe there was a recent time (the 1950s?) when there was something a bit more like majority agreement.
You've got a new set of those now, just with the subject and object of the sentence reversed.
I think you're right that there's a moral divide that maps onto the political divide.
I wonder whether the unity of morals and politics is a new phenomenon, and whether one could prove it to be so. Certainly in Britain the main political parties both had traditionalist (i.e. Christian) and liberal wings, which would vote with the equivalent wing from the other party on moral issues, as recently as the 1970s.
(Does a situation like this indicate that a country cares more about economics than morality? If morality was more important, one might expect to get Christian and liberal parties with left and right wings. I think this might have occurred at times on the continent.)
"I wonder whether the unity of morals and politics is a new phenomenon"
Based on cursory knowledge, I'd say that the Great Schism was Western Christianity's well-intended effort to maintain theological integrity but over time became the consolidation of power. I've been reading an annotated version of the Philokalia (Orthodox companion to the Bible composed of quotes from the first 1500 years of Christianity) and there are stark differences compared to the modern versions expressed by Western Christendom (pun intended). It's far more practical and sensible - fitting much better with eastern philosophy.
If politics isn't about values, what is it about? Well, techniques for achieving them: but most people don't know much about economics and the like, so it's mostly about values.
I'm inclined to think of this in terms of personal virtue, and it strikes me that there is a component to one's moral character that is independent of their moral beliefs. Let's go back to Rome again and pick some noteworthy historical figures as examples, say Cato the Younger and Sulla, and think about their moral character. The reception of Sulla ranged from a monster among his political enemies to being treated with great embarrassment even among his fellow Optimates. In contrast, while Cato has received criticism for being too uncompromising in all ages, few would dispute his moral integrity, both then and now. And that's despite the fact that he was championing the sort of values that would e.g. have husbands kill their wives if caught drinking.
I would suggest that there are two components to one's morality: the ethical theory that concerns oughts and ought-nots (and is in itself composed of purely arbitrary or at best contextual mores like what is an appropriate way to dress, and a body of more rigorously constructed morality that is not /objective/ per se, but given assumptions from evolved human psychology, such as that pleasure is better than pain, can be demonstrated by an argument, and that I would argue has grown at least a bit during the course of human history), and then one's virtue, such that a virtuous character is more capable of being their best self in embodying their ethical beliefs. For example, in one time and context a just character might kill their wife for drunkenness, while in another time a just character might fight against police brutality, but the justness of their character is timeless: we are still inclined to think Cato is a paragon of cardinal virtues even though his ethical beliefs have something reprehensible to anyone alive today, and I would like to think that the Romans too could see the virtue in our contemporary moral exemplars, even if they would think their cause is beyond misguided.
And this raises a question that I think is in principle answerable although I don't know how: are contemporary people more or less virtuous than people were 10/100/1000/2000 years ago?
It sounds devilishly hard and possibly controversial to flesh out the specifics, but I tend to agree with such a view, that there is an important universal element to the complex construct that we call morality (that could be called "virtue" or "character"), and then a huge variable cultural construction on top, or even to the side.
The interesting part then would be to try to gauge if people are less moral in the intrinsic aspect than they used to be.
What I find strange about discussions on this topic, is that people tend to either propose 1) a complete stasis throughout history, or 2) a recent decline. Even before looking at any data or polls, I would tend to hypothesize a basically chaotic curve that goes up and down, with trends in either direction and at vastly different speeds at different times and locations.
Scott's comment on wealth makes a lot of sense; global wealth makes us less obviously dependent on each other's goodness, so social mores inevitably tend towards more individualism, which can be felt as a loss of morality. My first impression is that this would mostly affect the cultural aspect, i.e. trying to achieve 1940s versus 2020s morality. Whether it would actually affect people's intrinsic goodness seems to me an open question.
This isn't a bad idea, but it does have some problems. Some ethical theories are easier to conform to than others. Some ethical theories are even ones that evil people have biases towards conforming to (such as ethical theories that require gossip, busybodying, or beating up or mobbing people). Note that this isn't the same as whether the beliefs of the theory are reprehensible.
People nowadays are very virtuous when it comes to ganging up on evil people on Twitter.
Not only is this article below the supposed high standards of Nature but if I recall correctly this messy study is based on Mastroianni's PhD. Social "sciences" are doomed.
You remembered correctly -- on the subject of Nature's modern sociopolitical self-immolation, I was rereading the NYT article today on the James Webb telescope naming "controversy"** and I was unhappy but unsurprised to read that Nature printed an editorial in support of the renaming campaign.
** If you're unfamiliar, it turns out an unsubstantiated rumour about James Webb being homophobic had been started and propagated by a couple prominent physics activists. The rumour was eventually debunked, but the slander operation continued unabated for a while until the issue stopped getting political oxygen.
Great post! I skipped on the "study" (delete unread), as I was sure not to learn anything new. I never skip Scott as I always learn/think sth. new. My morals should be 1970, and they are in some ways, but maybe not the whole package (church on Sunday was a thing in my family and most others/ divorce, kids-out-of-wedlock did happen rarely/ gay?/ LBTIAR+? kidding).
I do believe our "morals" in post-war Germany improved clearly: people born in my country are usu. acting much more domesticated today than they did in 1986, and from what I read, 1962 was barbaric (google: Straßenschlachten in München 1962 - street-battles weekend after weekend for no apparent reason) . The past is a foreign country. - We do have more people arriving with other Schelling-points nowadays, true. But I see some of their points moving fast.
Quick and dirty answer: yes and no.
Yes, an "illusion" because we moderns have rejigged morality to mean "all the old stuff our forebears said was sin and dumb cruft like that, we now say is perfectly cromulent and normal". The same way that owning slaves was not immoral in a society where slavery is commonplace and accepted, but only becomes immoral when later generations decide "no, you neither should nor can own slaves". Morality is subjective, a standard we draw up new measures for in every generation. There never was a Golden Age of the past where everyone was perfectly moral; it is up to us to define and create new Golden Ages.
No, not an "illusion" because hell yeah there's a moral decline from the standards of the past. Owning slaves is always wrong because humans are not property, and it doesn't matter if it's a thousand years ago in the Classical world or today in Africa. We've redefined morality to mean "the things we disapprove of" and that can be racism or sexism or transphobia, but the things we like and want to do are now okay - no they're not, not if you cleave to a standard of absolute/objective morality. We flatter ourselves that we are becoming kinder, nicer, more moral by comparing present day to the worst parts of the past (slavery, colonisation) but we omit comparisons with social and civic values we find inconvenient or which would leave us in a worse light.
Fair enough, self-deception to see ourselves in a favorable light is surely a powerful bias. But the big question is to distinguish the aspects of this gross construct called 'morality' that just change over time like the weather, and those that actually constitute some kind of 'moral fiber'.
It sounds very neat to cleave to some standard of absolute or objective morality, but the present reality is that there is no society-wide agreement on such a thing. Hell, I can barely agree with myself from one day to another on such matters, let alone with fellow forum posters from the other end of the world!
People can and do own slaves, depending on what you mean by owning. Domestic and industrial workers (generally people without legal residence) can be trapped to do work.
The difference between now and then is that trapping people isn't as common and isn't respectable.
Indeed, fifty years down the line, owning pets might be seen as slavery (claiming to own non-human animals and forcing them to live under inhumane conditions, sterilising them, keeping them from the company of others of their kind, and so on).
Is it moral or immoral to own a pet?
Yup, some of the more extreme animal rights advocates' views look like a new moral panic being born...
( Of course, fifty years is long enough that the debate of 2073 might be between ASIs on whether it is ok to own human pets. I wonder if they will neuter them? )
I'm not sure about these, but I also see other possible explanations.
Could a perceived decline in morality also be related to more and more of our lives being governed by large institutions which, while consisting of moral people, are driven by other incentives, e.g. profit maximizing companies?
Another reason for perceived morality decline could stem from less direct interactions with other people and more of our view of morality being affected by news and social media where immoral actions might be more widely publicized.
So while the average person might be regarded as equally moral as before, society could still be perceived as less moral.
<i>And this is part of why I find the introductory quote by Livy so annoying. What was morality to Livy? Respecting the lares and penates. Performing the ancestral rites. Chastity until marriage, then bearing strong children (Emperor Augustus’ famous law demanded at least three). Martial valor and willingness to die pro patria. Commoners treating patricians with the respect due a noble class, and patricians treating commoners with noblesse oblige.</i>
Since the paper uses violence as a proxy for moral decline, it might be interesting to see how violent Livy's own age was. Of course, we don't have crime statistics for the ancient world, so we can't really say whether personal murders or assaults were more or less common, but political violence was certainly prevalent. Livy was born in 59 BC, and published his first books probably around 27-25 BC. Between the time of his birth and becoming a published author, therefore, Livy would have seen no fewer than *six* civil wars (Caesar's Civil War of 49-45 BC, the War of Mutina 44-43 BC, the Liberators' Civil War 44-42 BC, the Sicilian War of 42-36 BC, the Perusine War 41-40 BC, and the civil war between Octavian and Antony 32-30 BC). His father's generation (say, the forty years before Livy's birth) would have seen another five or so (depending on whether you consider the Sertorian War as separate from Sulla's second civil war, and whether you count the Third Servile War or not). That's... really quite a large number of civil wars, especially when you consider that Rome had once been unusually politically stable compared to other Mediterranean city-states.
And it's not as if the Roman elite were all upright and virtuous between their bouts of civil warring, either. There are plenty of anecdotes from this era about the corruption and rapacity of Roman governors. Again, we don't have precise statistics, but opportunities for corruption had certainly increased (more distant, wealthy provinces --> more opportunities to extort stuff from the locals with minimal oversight from the government back in Rome), so it's at least probable that the anecdotal evidence does capture a real trend here. And of course, this rapacity didn't remain confined to the provinces, either: Sulla proscribed many citizens simply to get at their wealth, and the Second Triumvirate were notorious for seizing people's land to redistribute to their own soldiers.
On a more personal level, I remember my classical lit professor at university used to argue that there probably was more extra-marital sex in the late Republic than in previous eras. Basically, the big wars of the period (both civil and foreign) took elite men away to the provinces for sometimes years at a time, while their wives and daughters were usually left back in Rome. So you had a large group of upper-class women with both the motive to commit adultery (since the only alternative was going without any sex at all for years on end) and the means to do so (your husband can't easily keep an eye on your behaviour when he's off fighting in Gaul or Syria).
So we have a state trapped in a cycle of political violence, with corrupt and extortionate rulers, and high levels of adultery among the upper classes. That sounds like a situation where someone might reasonably say "Yep, moral standards aren't as high as they used to be."
Also... These sorts of "Here's someone in the past complaining about falling moral standards" anecdotes usually expect the reader to fill in "...but everything worked out fine anyway, so clearly it was all just a big fuss over nothing." But let's consider what actually happened in the late Republic. On the one hand, the Roman state survived, so I guess things worked out fine in that sense. On the other hand, you had a good century or so of escalating political violence, culminating in the abandonment of Rome's traditional constitution and the imposition of a military dictatorship because that was the only way people could see of stopping the continual civil wars. I think it would be quite reasonable for someone to consider this a pretty bad outcome.
Given that this is about trends, it does seem notable that all those civil wars happened before Livy's thirties. The second half of his life was the beginning of the Pax Romana. He straddles the divide between a century of violent internal strife and two centuries of unprecedented peace.
I think a lot of people misunderstand the early Imperial writers' complaints about decadence. They aren't saying that the barbarians are going to start beating down the gates of Rome any time soon; they're saying that the Roman people have become unfit for self-government, and are therefore doomed to live under tyranny (a much more plausible claim, given what actually happened).
Both the MG paper and the post cite Livy. Funny nobody cites Ecclesiastes "Say not thou, What is the cause that the former days were better than these? for thou dost not enquire wisely concerning this".
> Both the MG paper and the post cite Livy.
Actually, they don't. The post cites the MG paper, the paper cites some book... I would not be surprised if the book cites another book or paper, et cetera, until after a few steps the chain breaks somehow (the N-th source does not provide a source, or does not even mention the thing it was referred for).
I tend to agree that MG are coming in with their own set of biases; if morality is not declining, then people today are as moral as people in the past.
But that also means that people in the past were as moral as people today, and we can look at things like "racial injustice" to see that is not so.
So there can only be no moral decline if people today are *more* moral than people of the past. And if it is possible to improve morally in some areas, then it must also be possible to decline morally in some areas, unless we also propose that the present is becoming more moral on *every* measure.
Also, they slip this little nugget in:
"The United States faces many well-documented problems, from climate change and terrorism to racial injustice and economic inequality—and yet, most US Americans believe their government should devote scarce resources to reversing an imaginary trend."
But "racial injustice" *is* a moral issue, and we may take it from the above that MG feel one where we are *not* more moral, that we have stayed at the same level of immorality as the past (or maybe even got worse) for this measure. And there are calls to devote resources to reversing this trend of "racial injustice".
So what is it? An imaginary trend where we don't need to devote scarce resources to reversing it in the case of "racial injustice", or it's only imaginary when it's about issues conservatives care about but for liberals/progressives, it really is a genuine example of moral decline which must be reversed?
Well, culture does have a cumulative effect. Increasing numbers of people, more interconnectedness, better communication and possibly more leisure time together have brought us from survival to luxury. If accumulation works to such an extent on knowledge, technology and the arts, why could there not be some amount of general progress in morality throughout history?
On one side we have the basic relativity of things - what we grow up with is our frame of reference for "normal", and we accept it as an implicit baseline, warts and all. On the other side, we have the basic capacity to put ourselves in another's shoes, and to notice injustice towards them, but it requires attention and a bit of distance from the old normal. So once someone has pointed out that, for example, slavery is unjustifiably wrong, it does resonate, and it's very hard to put it back in the bottle.
These days a similar battle is happening around animal rights. It remains to be seen just how far our capacity for inter-species empathy goes.
I think it's a mistake to assume that the present situation is normal. While morals may drift somewhat from generation to generation, I think the recent moral upheavals are damn near unprecedented, with views of morality on many issues completely reversing within the space of one or two generations, e.g. from the point where you could be jailed for being homosexual to the point where you can be jailed for saying that homosexuality is wrong. This type of thing just doesn't happen all that often.
Our hypothetical person born in the 1940 who complains about the moral decline he's seen is well aware of the metamoral nature of his complaint -- he understands it's not just that the world has got worse according to his moral rules, but that the world has replaced his moral rules with a whole new set of moral rules. Of course the new set of rules is better by the standard of the new rules and worse by the standard of the old rules.
"e.g. from the point where you could be jailed for being homosexual to the point where you can be jailed for saying that homosexuality is wrong"
See this Dave Allen routine:
https://www.youtube.com/shorts/dK43yHIUMKA
Now that is both hilarious and apt.
I wonder whether the Protestant Reformation felt like a similar upheaval, with (in some places) being a loyal Catholic going from the default to being very wrong.
In theological terms, certainly, although both sides still agreed on what constituted moral behaviour.
> you can be jailed for saying that homosexuality is wrong
you cannot be jailed for saying that homosexuality is wrong. (You might be canceled.)
Depends where you're at. In the UK, you might at least get arrested:
"On 2 September 2006, Stephen Green was arrested in Cardiff for distributing pamphlets which called sexual activity between members of the same sex a sin. On 28 September 2006, the Crown advised Cardiff Magistrates Court that it would not proceed with the prosecution.[19][20] [...]
On 20 April 2010, police arrested Dale McAlpine, a Christian preacher, of Workington in Cumbria, for saying that homosexual conduct was a sin. On 14 May 2010, the Crown decided not to prosecute McAlpine.[26] Later still the police apologised to McAlpine for arresting him at all, and paid him several thousand pounds compensation.[27]"
https://en.wikipedia.org/wiki/Hate_speech_laws_in_the_United_Kingdom
That was before the Great Awokening. Might turn out even worse now.
True, police do sometimes arrest people without legal basis. Likewise in Canada it's technically legal for women to go topless, but I wouldn't be confident the police won't arrest them anyway. So fine, I amend my comment to "you cannot be jailed legally".
Okay, because I'm not working for the rest of the day (finished the tasks that needed doing) but I'm up and online so I might as well be doing something, let's have a gander at some of the questions.
(1) "“Is there any area near where you live -- that is, within a mile -- where you would be afraid to walk alone at night?”
Depends. There were "bad" parts of town within a mile that have since improved vastly, and the new "bad" parts are a bit more than a mile away. I absolutely wouldn't be walking alone at night on the weekends when the pubs let out, because a bunch of drunk aggressive idiots getting into fights and petty vandalism aren't too discriminating about not picking on innocent passers-by. But quite likely, 'twas ever thus everywhere.
(2) "They mention in the text that these kinds of questions did better than others; 50% report improved treatment of gay people. But what are the other 50% thinking??! The answer has to be something like “2002 to 2013 is too short a time to measure even extremely large effects that were centered around exactly this period”. But then what does that mean for the rest of their data?"
Re: the other 50% who don't think treatment of gay people improved in 2013, I Googled "gay marriage 2013" and found that was the year the Defense of Marriage Act was struck down by the Supreme Court. So if you're pro-gay rights in 2013, you might well feel "We had to go to the frickin' Supreme Court to get our natural rights because the knuckledraggers passed laws to deprive us of them and *still* state bans on gay marriage are not ruled unconstitutional, improved treatment my sparkly kinky Pride ass":
https://www.npr.org/2013/06/26/195956196/supreme-court-extends-gay-marriage-rights-with-two-rulings
(3) "Generally speaking, would you say most people can be trusted or that you can't be too careful in dealing with people?"
Oh, gosh. My native tendencies make Eeyore look like Pollyanna, so I'm heavily in the "Trust but Verify" camp. This has as much to do with "are you an optimist or a pessimist?" as it does with perceptions of moral decline. And when incentives to be backstabbers in order to get ahead are more plentiful and more encouraged ("only losers take the bus") and there is more social atomisation, more fragmentation, less cohesion, the perception of work and the loyalty owed between employer and employee has changed, 'greed is good' and so forth, when you are interacting with more strangers and there are a lot more scammers out there, then yeah, I think it likely that people will perceive "you can't be too careful".
About your point number 1: that mostly matches where I live. There are areas within a mile that I'd be afraid to walk alone at night (or in the day). My own neighborhood feels safe (or safer). But those "bad" areas have probably improved over the last 20 years or so. But then, there's a hipsterish neighborhood I live close to. It apparently was a horrible place in the 1980s and early 1990s, became a gentrified and therefore "safer" place in the late 1990s through mid 2010s, and now is starting to get the reputation of being more dangerous.
Edited to add: And the reason this is relevant is, if one lives in a very large city (as I do), one almost always lives within about a mile of a dangerous-at-night area, regardless of whether "morality" (however defined) is improving. Of course, the actual level of danger in the "dangerous" neighborhoods may change. I'd probably feel safer in the hipsterish neighborhood at 2am now than I would have in the other neighborhoods at morning rush hour (which for some reason "feels" the safest time to be there) 20 years ago.
I've always thought poll question (3) is a false dichotomy. Yes, most people can be trusted. And yes, you can't be too careful, because the small minority who shouldn't be trusted can do a lot of damage if you do trust them.
That's a good point! There was an earlier discussion about locking one's doors. I lock my doors when I leave the house, not because I expect the average person to steal anything - but all it takes is for 1% to be systematic thieves to justify the precaution.
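The door-locking logic is just an expected-value calculation. Here's a quick sketch with entirely made-up numbers (the thief rate, encounter count, and dollar figures are all invented for illustration, not drawn from any data):

```python
# Toy expected-value sketch: even if 99% of people are honest, the dishonest
# 1% can make locking your doors clearly worth the bother. All numbers invented.
p_thief = 0.01               # assumed fraction of systematic thieves
encounters_per_year = 500    # hypothetical passers-by with access to an unlocked home
loss_if_burgled = 2_000      # hypothetical average loss per burglary ($)
cost_of_locking = 10         # hypothetical annual nuisance cost of locking up ($)

# Chance that at least one thief tries the door over a year
p_burgled = 1 - (1 - p_thief) ** encounters_per_year
expected_loss = p_burgled * loss_if_burgled

print(f"P(burgled at least once) = {p_burgled:.2f}")
print(f"expected annual loss = ${expected_loss:.0f} vs ${cost_of_locking} to lock")
```

With these assumptions the expected loss is orders of magnitude above the cost of the precaution, even though any individual passer-by is overwhelmingly likely to be honest.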
The first question really should be changed to emphasize human threats. I'd be terrified of walking into the swamp that is a quarter of a mile away from my house at night because I might step on a venomous snake or a gator. That has no bearing on the morality of my neighbors. But of course, the authors didn't write the questions; it's just an old one that kind of works.
To be fair though, the degree of "trust but verify" varies greatly between societies. For example, I was born in the glorious USSR, but since then have been assimilated into your Western capitalist pig-dog culture of global imperialism (tm). So, the other day, I was eating at a local cafe, noticed that I ran out of sugar for my coffee, then got up and got more sugar from the tray across the room -- leaving my cellphone unattended at the table in the process.
When I realized this, I felt a brief spike of terror, because in the glorious USSR my cellphone would absolutely be gone by the time I got back to the table. If I went to the police and tried to file a report, they'd look at me like I was crazy: "you left your stuff and now it's gone, it's your own damn fault, now stop wasting our time". Yet here in America, in the area where I live, my cellphone was relatively safe. In fact, if someone tried to take it, I bet other bystanders (or the staff) might even try to stop him.
However, note that both Soviets and Americans would see what happened as perfectly moral (or at worst morally neutral). And there are places in the world today where even flashing an expensive cellphone on the street will get you stabbed (or at least robbed); and the denizens of those places consider *that* to be a morally neutral outcome, as well. It all depends on your perception of normality.
There is probably an element of illusion in the perception of moral decline in that people get more concerned about these things as they get older and have children and eventually become more dependent. Just as there are always older people complaining about how young people don’t know how to speak properly.
But that doesn’t mean there can’t actually be moral decline.
My guess (not having looked in detail) is that the measurement graph is not considered much evidence in favour of anything because, frankly, it's a noisy mess. Sure, the average of those points is dropping, but the points are so scattered that any change would need to be either super drastic or super sudden to come out as significant in this sort of analysis... I guess.
Also, Scott is alarmed that the average *perception* of trustworthiness decreased, but doesn't that kind of match the paper's thesis? Scott seems to assume the perceived decline reflects a real decline in trustworthiness.
This can get a bit philosophical... Can you really be said to be trustworthy if nobody trusts you ever?
Trust levels are certainly not at zero, but yes, a person can be trustworthy without being trusted.
Really good take on the recent lack of moral decline paper. It would be great to see a detailed response. When reading the original by @Adam Mastroianni a quote from the British historian Dominic Sandbrook came to mind: ‘there are moments in history when disputes about history, identity, symbols, images and so on loom very large. Think about so much of 17th-century politics, for example, when people would die over the wording of a prayer book.’ Those people would think we were immoral beyond belief … https://www.theguardian.com/world/2021/jun/13/everything-you-wanted-to-know-about-the-culture-wars-but-were-afraid-to-ask
Hypothesis: People think the world is getting worse because, if they grew up in an environment which was reasonably trustworthy *to* *them*, they will think morality is in pretty good shape. Later, they will experience various defections, and think that things are getting worse.
Also (and thanks for underlining changes in morality) people when they're young may be more willing to believe that standards of public morality are being upheld and acquire a more accurate, cynical view with time.
Possibility for recent decades: computers have made defection much easier. I remember when you could pick up a phone, and you would almost certainly be hearing from a person who wanted to talk with you personally, not an advertisement or a fraud or an automated system doing who knows what who just hangs up. People may have been just as willing to do phone scams in the sixties, but the opportunity wasn't there. A steady drizzle of small defections sounds like it isn't being caught by those studies.
I came up with "nostalgia is fond memories of when your knees didn't hurt". Is it possible people think the world is getting worse because their health is getting worse?
"computers have made defection much easier" Agreed. Also more visible and better reported - "trending scams" reports and similar warnings...
>But when people say “there was more morality back in the good old days”, they rarely mean “in 2000 compared to 2015”. Even if moral decline were constant and linear, 2000 - 2015 might be too short a period for ordinary people to notice the difference.
I've seen multiple commenters who say that gay rights are fine, but that trans rights are a bridge too far and an example of moral degeneracy. If we take them at face value, then they really are yearning for a return to the good old days of the 2000s.
I think if you asked those people they'd disagree with you about the definition of "trans rights."
No, it's not just people saying "I support trans people but we can't allow children to transition" (or some other specific thing that they're against), you also can find people saying things like "LGB people would get lots of support from the right if they just dropped the TQ+ part" or "we were fine with gay marriage when we were promised that gay couples would be nice normal people like Pete Buttigieg, but now this trans stuff has gotten out of hand and we're suspicious of the whole movement."
That's the point -- "trans rights" meaning "transfolx get the same rights as everyone else" is different than "transfolx get additional benefits/options in order to make their life better." It's entirely reasonable to believe that free gender affirming care is not something that the government should provide (since you know, most people don't get free vision-correcting care etc.) without being tarred as being a transphobe.
I guess this idea that everyone adopts the zeitgeist of their formative years and sees all changes to that as moral decline is actually a pretty good explanation. Empirically I believe it to be true. But there's a follow-up question - *why* do we do this? Why are we so morally inflexible? Why, when I grow old, will I be unable to accept whatever the cool kids believe?
You could call it moral maturation instead of moral inflexibility. And what you believe then will be a firmer, more coherent, better calibrated and informed spin on your current beliefs (with some of them probably tossed out wholesale).
What the cool kids believe is relevant for the other kids. But what are silly children's games to a mature, experienced adult?
But I don't think that's the model of belief-forming that Scott Alexander is proposing. Our beliefs don't mature as we grow older, they stay largely similar. The generation below also does the same thing - but with a different set of beliefs. The reason for generational difference is not down to differing levels of maturity, it's down to having grown up in different eras, or in different cultural context. We're not in different places because we've moved further than them from the same initial spot; rather, we started in different places. It's a cohort effect not an age effect.
One other thing I think you can do to help make sense of things is swap "social cohesion" for morality.
I think that the way social cohesion tends to work across much of history is there are brief periods where cohesion increases dramatically, followed by long periods of slow decline. The slow decline of social cohesion is often an *increase* in living standards, because folks are making fewer sacrifices and focusing more on their own well-being.
If this toy model reflects reality, it also mechanically explains the way that observers throughout history keep noting apparent moral decline. Every generation, with a few rare exceptions, lives and dies within those long periods of slow social decay.
Interesting. By "there are brief periods where cohesion increases dramatically" do you mean crises when everyone has to "pull together" to survive (e.g. London during the Blitz)? Or moral and religious revivals? Or (in a much darker sense) times when the morally-nonconformist are killed off en masse?
Probably a mix tbh.
To elaborate the toy model, "normal" conditions lead to slow decline of social cohesion. Extraordinary conditions create high-variance moments. Sometimes they're really bad and you have a catastrophic collapse, other times they're really good and everyone pulls together.
Cases of collapse create an environment in which new modes of social organization can compete, and whichever is best at generating social cohesion tends to win, thus leading to another high point from which the slow process of decline can begin.
This is obviously sort of like the "hard times lead to strong men, strong men lead to good times, good times lead to weak men, weak men lead to hard times" meme. But it suggests it's more of a punctuated system than an even cycle. Crisis moments cause a lot of creative destruction and rewrite the rulebook, then future generations coast until conditions force them to adapt to something new.
To top it all off, we could add that not all high-cohesion societies are equally moral. Some high-cohesion societies might be very moral, and their decline is regrettable. But if the prevailing society is oppressive or otherwise bad, social cohesion may be a force for evil, holding together something that ought to be allowed to perish.
https://youtu.be/nUBtKNzoKZ4
Lol, I had forgotten about the difference in rotary phone numbers !
Also, reminded me of something that happened recently: I hadn't used my debit card in PIN mode for so long (compared to the very-short-range tap-to-pay mode) that when I actually went to the bank to put cash in, I realized that I had forgotten it!
(Thankfully, it came back to me during the trip to the nearby store.)
"There is less homicide today then in 1900" (do you mean 1960?) (note 2). In any case, when comparing homicide rates across time, you must control for the fact that medical care has greatly improved across time. A patient with a stab or gunshot wound that would be lethal in earlier times, is now much more often saved on the operation table. Hence fewer homicides, and more (only) violent assault.
Yup. Anyone have some estimates on the numerical size of the effect? Ideally one would want to send 100 NIST-standard reference material shooting victims to an ensemble of ERs every few years, and track the trend in survival rate, but this would be hard to arrange... :-)
This study fits well in the category of "human history started in 1965 and everything before that was basically cavemen and cartoonishly villainous robber barons twirling their moustaches."
When I hear or read about a poll, I skip it, delete, and move on down the road. I do read and I do listen, but not to polls.
Cheers
The natural comparison would be to look at periods where people claimed moral decline vs moral strength, especially but not exclusively contemporaneously. These periods do in fact exist, and the idea that everyone at all times has thought we are in moral decline is something of a myth. If your contemporary-morality theory is right, you should expect younger people generally to think we are in a morally strong period. But I don't feel (I know, I know) that's the case today, for example. That the average 20 year old feels America's an especially moral society.
You have several examples even within the last century: The genteel morality of the Edwardian period vs the 1914-1929 excesses. The hardscrabble self-sacrifice of the 1930s to 1950s vs the free love, radicalism, and drugs of the 1960s to the 1970s. And so on. I'd start there to tease out the differences. (Also note, contrary to some political claims, those are not all especially conservative periods.)
I wonder if the last paragraph is meant to specifically address Mastroianni's paragraph on substack about his paper https://www.experimental-history.com/i/126157345/first-a-note-to-the-pedants
Thanks, upon reading this (especially the reasons driving them) looks like the paper might be less vague and worthless than I assumed, though it's still not great...
This may have to do with the definition of ‘morality’ more than anything else. At least some respondents will include sexual promiscuity and open homosexuality among the immoral behaviors that are increasing, together perhaps with tattoos and obesity. None of these have anything to do with being mugged or not being able to trust others with money, but they are all frowned upon by traditional religions.
> 50% report improved treatment of gay people. But what are the other 50% thinking??!
With the caveat that this is "epistemic standard: stab in the dark", there are certainly "progressive" spaces nowadays where it's forbidden to mention that treatment of minorities has got better, because that would be like claiming that things are maybe not so bad, which would call into question the whole need to burn society to the ground.
There is, of course, a kernel of truth to this idea - homophobia has not completely gone away, and there's still places and jobs and families where being gay means you're going to have a very bad time. I don't disagree that there's still lots of work to be done. But I also think it's both correct and relevant to mention that being gay went from illegal to legal, or that a gay person working in a democratic-leaning large corporation will have a much better workplace environment than a generation ago, at least as far as their orientation is concerned.
Worse than that, they'll tell you treatment of minorities has gotten worse because it's less explicit. I've interacted with people who will tell you without the slightest hint of insincerity that the US is a more racist place today than it was in 1955, and the fact that you see so much less overt day to day racism is actively treated as evidence of that claim.
Absolutely. And this very day, June 30 2023, we have an excellent example: *because* we live in a society where government power is being used to compel people to proclaim their approval of gay marriage, and SCOTUS said, "how about let's not", therefore the masses are essentially being told that Clarence Thomas has personally killed and eaten a dozen gay babies on the steps of 1 First Street. The first piece of evidence, the background situation where government power is being wielded this way, is just the moral arc of the universe: it has nothing to do with treatment of gay people, it's just the way things are. The latter, the single step back by SCOTUS, is evidence of supreme backsliding from this high ground; of worsening treatment. So *of course* people who have any ounce of trust in the media think that discrimination has gotten worse. It is inconceivable that they could think otherwise when the evidence is put into the boxes it is put in. There's no way a study repeated today would find anywhere near as high a number as 50% reporting improved treatment. But what's truly baffling is that Scott doesn't understand why. What is he thinking??!
"Compared to the past, have things gotten better, worse or stayed the same [regarding] treating African-Americans with respect and courtesy? (2002 vs. 2013)"
Uhm... what if a respondent does not actually respect Hispanics, gays, and/or African Americans himself? If their mistreatment is framed as a problem that can get better or worse, presumably depending on social attitudes, then some of the respondents must dislike them; otherwise the poll would be unrepresentative and therefore worthless. But if a gay-disliking person notices that gays are treated with (from his perspective) undue respect and courtesy, he would have to answer that the situation got worse. How are those questions not garbage in, garbage out? [Maybe I'm missing some context?]
It's one thing to define moral progress as equivalent to nicer/kinder treatment. But I do not see how it can even be measured by that definition, given that the respondents would not necessarily share that assumption.
Eh... I think the authors did not actually use that kind of question? He states that "For now, we only care about the parts of morality where pretty much everyone would agree." in his "A note for the Pedants" (thank you for thinking of me!).
https://www.experimental-history.com/p/the-illusion-of-moral-decline#%C2%A7first-a-note-to-the-pedants
"Everyone whose opinion I care about", that is.
I had been assuming that this general study was accurate and fit with my perception of how people experience negativity bias; however, I did think of an alternate explanation, apart from a critique of the study itself.
At some point I switched from a "things are declining" conservative to a "many things are great and getting better and it will all work out" also-conservative. But consider the implications if it were a very common belief that things will work out, especially that they will pretty much naturally work out. A society without at least a strong component of people who believed otherwise might be pretty much guaranteed to go ahead and experience moral decline.
It might be like a driver on a road trip who notices they are 3/4 of the way to the destination and concludes that, clearly, the belief that taking your foot off the gas will reduce your momentum has not borne out. In this sense, negativity bias and accurate assessments of actual moral decline may both exist as a social and personal form of keeping your foot on the gas.
"But consider the implications if it was a very common belief that things will work out, especially that they will pretty much naturally work out"
Is that not The Whig Version Of History? (And looking it up, TIL it comes from a 1931 book by Herbert Butterfield):
https://en.wikipedia.org/wiki/Whig_history
"Whig history (or Whig historiography) is an approach to historiography that presents history as a journey from an oppressive and benighted past to a "glorious present". The present described is generally one with modern forms of liberal democracy and constitutional monarchy: it was originally a satirical term for the patriotic grand narratives praising Britain's adoption of constitutional monarchy and the historical development of the Westminster system."
Online copy of the book, which is short enough to read quickly:
https://www.clarehall.cam.ac.uk/wp-content/uploads/2022/08/Butterfield_selected.pdf
"The danger in any survey of the past is lest we argue in a circle and impute lessons to history which history has never taught and historical research has never discovered — lessons which are really inferences from the particular organisation that we have given to our knowledge. We may believe in some doctrine of evolution or some idea of progress and we may use this in our interpretation of the history of centuries; but what our history contributes is not evolution but rather the realisation of how crooked and perverse the ways of progress are, with what wilfulness and waste it twists and turns, and takes anything but the straight track to its goal, and how often it seems to go astray, and to be deflected by any conjuncture, to return to us - if it does return - by a backdoor. We may believe in some providence that guides the destiny of men and we may if we like read this into our history; but what our history brings to us is not proof of providence but rather the realisation of how mysterious are its ways, how strange its caprices — the knowledge that this providence uses any means to get to its end and works often at cross-purposes with itself and is curiously wayward.
…Instead of seeing the modern world emerge as the victory of the children of light over the children of darkness in any generation, it is at least better to see it emerge as the result of a clash of wills, a result which often neither party wanted or even dreamed of, a result which indeed in some cases both parties would equally have hated, but a result for the achievement of which the existence of both and the clash of both were necessary."
"I think (not at all sure!) that this means “the year of the survey explained only 0.6% of the variance in responses”. That sounds tiny. But looking at the graph, the effect looks big. I would file this under “talking about percent variance explained is a known way to make effects sound small”, although I’m not sure about this and I welcome criticism from someone more statistically-literate."
The slope is striking but what's not shown is the literally thousands of data points to show how poorly that slope fits the data generally (aka low variance explained). You can already tell that birth cohort is a pretty big deal, perhaps age of respondent even. Both of these have something to do with "years" but are not the year of response variable. Another source is plain old-fashioned between-participant variability - people within a cohort are more similar to each other than people between cohorts, but even within a cohort, some people are trusting and others aren't. Year of survey response isn't the key factor in explaining variance in trust, other things are.
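To see how a visible slope can coexist with sub-1% variance explained, here's a toy simulation (all numbers invented, not the paper's data): a real year-on-year decline in average trust gets swamped by between-person noise, so the regression recovers the slope while r^2 stays tiny.

```python
# Toy simulation: a genuine downward trend in trust that nevertheless
# "explains" well under 1% of the variance, because individual-level
# noise dwarfs the year effect. All parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
year = rng.integers(0, 16, n)  # survey years 2000..2015, coded 0..15
# True trend: trust drops 0.005/year; between-person noise sd = 0.25
trust = 0.5 - 0.005 * year + rng.normal(0, 0.25, n)

slope, intercept = np.polyfit(year, trust, 1)
pred = slope * year + intercept
r2 = 1 - np.var(trust - pred) / np.var(trust)

print(f"recovered slope: {slope:.4f}")  # close to the true -0.005
print(f"r^2: {r2:.4f}")                 # tiny, despite the real trend
```

Plotted as yearly averages the decline would look unmistakable, but against twenty thousand scattered individual responses the year variable explains almost nothing, which is exactly the "talking about percent variance explained makes effects sound small" phenomenon.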
Hot take without having read the study or even this article yet: people love to quote writers from antiquity writing about moral decline as though that proves that moral decline isn't real. But of course: all of the societies from classical antiquity *really did collapse* centuries ago! And from what little I know about the fall of Rome, it really did have something to do with Rome's inability to produce younger generations with the tenacity and vigor of the earlier ones.
Civilizations have been worrying about moral decline forever, and civilizations have also been collapsing catastrophically forever. It is not obvious to me that these are unrelated.
I'm pleased that Scott's third point matches my comment on their substack at the time, though I used killing one's son as the example rather than one's wife.
The authors have a Substack? Where?
https://www.experimental-history.com/i/126157345/first-a-note-to-the-pedants
There it is, thanks for linking him.
Rereading my comment I see that I was agreeing and amplifying (though not maliciously -- I really like Adam and his substack) and strongly implied that the changes in morality from the past to the present were in line with the shift noted in Roman days. That is, people are becoming more accepting and compassionate, but this is a disaster from the perspective of traditional morality -- if we don't execute drunken or adulterous women (including divorcees) and beat children constantly for minor infractions, and put gays in the bog, and hang race-mixers from trees, how on earth are we going to maintain the /mores/ of eras where good upstanding people approved strongly of these measures? It's a chicken and egg problem.
Scott's thrive/survive notion is still the best attempt at resolving that problem that I've seen.
I agree that it's a useful filter but I'd claim it's still chicken/egg. All the conservative types that I know feel panicked about the world not because of crime or street defecation (both in places they already avoid and don't care about) or the economy (which they often appear to think is doing better than I think it is) but precisely because of the moral decline. If they weren't conservative already they wouldn't think they were just clinging to survival. What Scott says elsewhere about how tribes have a right to exist (I think in the "Archipelago" original?) might explain that a bit, but the solution is still to leave a threatened conservative tribe for lesbiantopia or whatever. And if someone doesn't want to do so, we get into all kinds of Bulverism -- conservatives want a strong society to whip them into shape because their internalized toxic masculinity makes them feel flabby and weak, or they're afraid of self-reflection because they just need authority, etc. I'm not saying Scott implies these, they're just examples of how it's a chicken/egg problem -- they wouldn't need X morality if they weren't in group X, but they wouldn't be in group X without it.
Yes, to be more accurate, it's the best argument against naive moral anti-realism.
Nothing has any right to exist; that's a liberal fantasy. Existence is clawed out of an indifferent universe by any means necessary (which also includes liberal ones, when appropriate). It's just natural selection one level higher. There's not a universally correct morality, but there's a meta-algorithm which describes the correct morality for any actual material circumstances. Which is not to say that the Archipelago is a bad idea; having more diverse "laboratories" would help the meta-algorithm converge quicker.
Here though you're using "conservative" as some super-specific (USA?) ethnicity, while actual conservatives are about being very careful about breaking things, for instance conserving liberal values like free speech and abortion rights (probably for the last one, it's kind of on the fence since it's "only" been half a century).
While we're thinking about the Archipelago, how does a right to leave get enforced?
I thought you'd cited Adam Mastroianni before on ACX? I associate Experimental History vaguely with Erik Hoel's The Intrinsic Perspective, or Slime Mold Time Mold, or Infovores, all of which I think have also been discussed here.
Adam Mastroianni (one of the co-authors) is actually a reader of your Substack. He won an Honorable Mention in the Book Review Contest for Memories Of My Life, which you also reference here: https://astralcodexten.substack.com/p/galton-ehrlich-buck
Is there any historical period, now commonly thought of as a "golden age", in which the people living through it actually recognized it as such?
"The r^2 statistic of the graph above is listed as '-0.006'." No, the r^2 statistic (which is typically a positive number) is given as 0.008 (on page 56 of the Supplement), and the square root of 0.002 is about 9%. This does not affect your point.
Indeed, r^2 cannot possibly be negative!
There is a statistic that I find very interesting, not for predicting any decrease in morality (I agree with Scott: morality is clearly not constant across time, place, or social niche, so speaking of a change in morality is almost meaningless without much more context) but for predicting social unrest or stability: trust in government. 75%->20% is big enough to mean trouble ahead. Especially together with the overall trust level (between individuals) going down. My personal impression is that the social contract is getting very thin in the Western world, and those statistics confirm this impression...
I'm afraid my default assumption about social-science research these days is that it is not about discovering truth, it is about establishing truth. Critical Theory Uber Alles.
In politics, there's not much difference. Critical theory is pretty correct about that, at least.
I think it's useful to imagine the results of a poll that asked, "Do you think your own morality is declining/increasing/staying the same?" I feel like I'm becoming a better person over time, and I suspect that most people would say the same (about themselves, not about me). Yet I would have an extremely hard time coming up with even imaginary statistics to support that perception.
If there are not perceptual biases associated with "morality over time" questions, then people would generally have to be becoming more moral individually while at the same time becoming less moral collectively. That's not impossible; one way to pull that off would be if babies of this generation emerge from the birth canal less moral than babies of previous generations. Seems doubtful.
Another way would be a change in the rate of improvement of personal morality over time: people now improve their morality less rapidly on average than in the good old days. But that would show up in the responses to the self-improvement question. Also doubtful.
What if people are evaluating the change in morality over time by taking a heuristic shortcut and comparing the morality of old people vs young people? If so, improving personal morality requires old people to be more moral than young people, and the perceptions become consistent.
Here's what I think is actually going on (well, this plus a lot of other stuff): people evaluate society's morality in general by comparing it to their own moral code, which they perceive as superior. The young attribute this difference to societal decay, since what they learn about the past is biased toward noble people doing noble things. The old regard their moral code as the standard, and young people abandoning it represents moral decay in general. The latter is Scott's "moral drift", which is exacerbated by increasing exposure to outgroups.
Your hypothetical poll is interesting, and I think you're right that few people would say their own morality is declining. To me, the distinction between individual morality and collective morality comes from people judging their own morality by the behaviours they would like to exhibit, while judging society's morality by the behaviours they observe. With that in mind, a moral decline in my society is summarized by thoughts like "If I lived 20/30 years ago I would be able to act in the way I think is moral without being punished for it, but since I live today I have to act immorally to avoid harm". There's some cognitive dissonance involved but I don't think it necessarily means that the young and the old have a different moral code, young people could have a similar feeling.
The remaining question is how you get a decline in collective morality without a decline in individual morality, but I do think we observe these sorts of situations in other contexts. For example, take the current publish-or-perish structure of academia. Most (if not all) people involved think it's a bad setup, we didn't always have this kind of system in place, and I think if you polled them, most academics would prefer a different kind of setup and they wouldn't personally say they're in favour of the current norm. Yet, despite all this the system persists.
I disagree, I feel like I'm becoming less moral over time (I'm in my early 40s).
I feel like it's easy to be moral when you're eighteen and you don't have any real responsibilities or needs. "I'm going to be morally pure knight Templar and I will never do X, Y or Z". But as you get older you need to compromise more in order to live as an ordinary person in the world, and it turns out that trying to maintain those moral standards makes it difficult to interact normally with other people.
So are you becoming less moral or changing your moral standards to maintain an appropriate level of morality?
This reminds me of a bit in C.S. Lewis about the real temptation in marriage isn't lust, it's avarice.
' “do you think morality is declining?” and people always answer yes.'
Because it always is until it resets, then starts over:
Civilization Cycles Compared
https://drive.google.com/file/d/1btPKl8ynTBr32c17VfKt2mcgD_I97nUg/view
Outside the debate over what real moral decline has occurred or what that means, I do think that their theory on how people construct their mental model of moral decline based on memory and attention is the interesting part.

One thing that struck me from the paper is that it seems you can get a rough "perceived decline per year" that is fairly linear and observable among people at any adult age from questions like "is such and such moral thing better or worse now compared to X years ago". I don't think there's much of any metric that would tell you actual morality has declined in a linear fashion. Crime rates went way up and then back down in your graph, and I'd be surprised if any other objectively measurable thing we want to use as a proxy for morality shows a strictly linear change. And it doesn't seem plausible that morality shift from when someone was born to the present has any sort of constant rate of change either; I would be skeptical of someone who said the shifts in morality from 1960-1970 were the same in magnitude as the shifts from 1990-2000, for example.

Your thermostat point is well taken, but the only thing we need to doubt is that people's apparent linear perception of decline corresponds to reality. And there I do think something like the authors' model of memory and attention is valuable. It proves much less than they want, and maybe "people's intuitive evaluations of the past relative to the present are based on error-prone psychological processes, which psychologists have said for years, and here's another area where that seems true" is not exactly headline-worthy, but it does seem valuable.
Since this touches upon my area of expertise, a detail on Livy: the paper that Scott quotes, while very learned, perhaps goes too far on the issue of husbands having the right to kill wives who drank. Augustus, Livy's contemporary, actually removed the right of husbands to kill their unfaithful wives altogether. Ancient patriarchal customs like the ius osculi – allowing the male relatives of a married woman to check her breath to see if she had violated a ban against drinking alcohol, which does not mean it gave them the right to kill her – almost certainly were obsolete early in the republican period. Indeed, they were mocked mere decades after Livy and Augustus, for example by Agrippina the Younger when she asked her then-husband, emperor Claudius, to kiss her as a drinking test.
I don't recall a single example of a woman killed for drinking in Roman history, and tons of recorded examples of drunk women.
I wonder if there's something like, "virtues" in the virtue ethics sense tend to decline, because which ones are relevant shift and the older generation are used to thinking of the virtues they follow as inherent morals. And new morals occur but don't feel like virtues at first.
Or the balance shifts towards consequentialism.
Or the pendulum swings back and forth and sometimes we have new moral movements (let's be vegetarian, let's abolish slavery, let's fund international charities), and sometimes we get "we used to do this and now no-one bothers"
Proof by exhaustion is a very good way to put it. Follows the common psych survey paper trick of throwing a ton of cheap, small MTurk samples at the problem. Can't conclude anything meaningful from any of those studies. Possibly the 700 person one from Prolific is ok; not familiar with them. They make no effort to correct or de-bias anything, because you can't when you're running MTurk studies with 100-300 people each.
Explicitly asking people what they think morality was like over the course of 20 year increments before and after they were born is ludicrous. I have no clue how I would answer those questions, and I doubt anyone else really does either. It's a noisy, unnecessarily complex way to get at the general concept of 'subjectively, do you think things used to be morally better or worse than they are now'. And injecting unnecessary noise is fundamental to their results.
Worse, they then compound that problem by making a derived indicator based on that noisy, overly-complex set of questions and age ((moral decline in 2020 - moral decline at year of birth) / age), on a sample of 347 from a low-quality source (study 2c). *Then* they run a regression on that indicator. The relevant result is that age doesn't impact perception of moral decline. Aka they don't find a significant result. Because the test is comically under-powered and because the explanatory variable they care about is also in the target variable. Proof by obfuscation is another way to put this. They throw around a ton of unnecessary jargon to hide how simplistic what they're doing actually is.
Study 4 (the big one with archival surveys) is a mess, if I'm reading it correctly. Presumably they have all the individual survey responses, so they're regressing the various year indicators on the outcome. But that doesn't really make sense. They care about individual changes, but there aren't repeat responses. What they do is conceptually identical to regressing on the overall survey average, but with a lot of noise injected. And that explains why they can't see an effect in R2 even when you can see one in mean differences: year is a discrete variable with like 8-12 values at most for all of these studies. The variability within years is nearly always going to be far greater than the variability across years.
Even if you have a clear mean difference across years, if the variance within years is sizable and varies across years significantly (both of which are always true for survey time-series), or if the trend in the average response isn't strong and linear (which it rarely is for surveys), then you won't see an effect in R2. You will see it in the coefficients for the individual years. Which apparently they also didn't test, because they just report one year coefficient, so they probably used year as a generic trend indicator rather than a factor (testing for 'bigger number make survey change' VS what's the effect of year 1, 2, etc.).
What would make more sense (especially because it's what they do earlier in the paper) would be testing bias-corrected mean differences of average survey results for every year-pair for every series. But they'd find effects if they did that and it wouldn't tell the story they want.
This is not good work. But it's a topic people like to speculate about and Dan Gilbert is famous, so good enough for Nature I guess. The short version of all this is that they wanted negative findings, so they picked under-powered samples and used noisy, complex metrics. It's like reverse p-hacking.
I suspect that they find that the year only explains 0.3% of the variance because they use additional predictors, such as birth year. So there is a decline across birth years, but within each cohort the difference is basically 0.
I suspect a trend towards "morality is harm reduction" not because of any Haidt principles but simply because we live in an age where we prefer observable performance measures. I mean folks (including me!) will literally argue online about whether a studio produced movie is "objectively bad," when it's clearly a matter of what you value in media. I wonder if morality is the same - it's values-based, but our age is one where unsupported value judgments are passe, so we have to be able to say "see, this value is important because it reduced this specific thing we all agree is a harm."
While I'm mostly this way too, there are tradeoffs in insisting on this standard like there are tradeoffs in everything. For instance, I'm likely to follow any safety instructions/regulations with my daughter, because in any individual case there's a clear harm for ignoring them vs. a much lower cost for following them. But it's entirely possible that in the aggregate this will teach her that safety is an overriding concern and risk-taking is to be avoided. This could create something like moral decline if spread across a generation - it would be extremely difficult to measure in an unbiased way, but still lead to generally worse QoL.
Note also that I'm still defining "moral decline" as "leading to a generally worse QoL." If I made this kind of argument to religious acquaintances they would say "morality is defined by doing what God wants." They might (depending on how infected with modern sensibility they are) argue that doing what God wants leads necessarily to higher QoL but even if it didn't they wouldn't change their mind about what is moral and what is not.
I expect a Pew poll conducted in 850AD would also have seen agreement with the moral decline hypothesis, just as it would with a 'kids today' question. The proper determinant that changes is 'morals' - which certainly change over time. Having said that, like the respondents in 850AD and those in 2023AD, I do think people pay less attention to moral questions, or are more willing to think they shall get away with transgressions. I suppose I would need a news aggregator from 850AD to see how the mass murders and child abuse cases might compare with today's. I suspect such a non-existent device would prove me to be talking out of my arse! Pinker would suggest so.
The violin plots in Fig. 2 of MG are atrocious enough to cast doubt on everything else in the paper.
Glad I wasn't the only one bothered by that - violins aren't appropriate for discrete data!
Scott, I'm going to unscrupulously pirate this post, edit it, and assign it to my junior high classes as supplementary reading next year. I have to teach the "decline bias" in critical thinking. It's always felt weird but I didn't have time to look up anything. You've now articulated everything I had vague misgivings about.
On the other hand, there's Qoheleth. "Say not thou, What is the cause that the former days were better than these? for thou dost not enquire wisely concerning this."
In section III, I can see a potential case for net zero moral change over time. Say humanity is in a blind-men-and-the-elephant scenario. Each generation is partly right about something and really wrong about something else. Each younger generation figures out what the older ones were Damningly Wrong About. And each older generation sees that the younger ones are Damningly Lax About Something Else. But since the older generation always writes the books and editorials, everyone gets the impression things are declining, and no one notices it's net zero.
Anyway, that's the best I can do for MG. Back to editing your post for junior highers...
> 100% of the HDI was within the ROPE
This is the Bayesian version of a significance test.
ROPE = region of practical equivalence, in other words a small area around some parameter value within which values are considered "close." HDI = highest density interval, in other words the measured value of that parameter; because it's a distribution, they look at a range of values. In particular, they look at the X% of the distribution which gets the most probability into the smallest range; according to https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html, X=89 is standard. What it actually means is "there is a high probability that the measured parameter value is very similar to the hypothesized one", which in this case I'm guessing is 0.
Yes, yes: X% is the percentage of the distribution considered for the decision rule. But at least as important is the definition of "closeness", i.e. the width of the practically-equivalent-to-zero region around 0. How wide is it? Very context dependent.
I think ROPE is suffering the same fate as p-values in NHST, in that nobody properly understands it. The key definition of what is considered "practical equivalence" is buried as a side note.
The linked article claims that +/- 0.1 standard deviations is standard for defining ROPE, but I don't know where this comes from or if it's actually common.
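For anyone who wants the mechanics spelled out, here's a minimal sketch of the HDI + ROPE decision rule in plain Python. The posterior samples and the ±0.05 ROPE width are made up for illustration; real analyses would use actual posterior draws and a context-appropriate width.

```python
import random

random.seed(1)

def hdi(samples, mass=0.89):
    """Highest density interval: the narrowest window containing
    `mass` of the sorted samples."""
    s = sorted(samples)
    n = len(s)
    k = int(mass * n)  # number of samples inside the interval
    widths = [s[i + k - 1] - s[i] for i in range(n - k + 1)]
    i = widths.index(min(widths))
    return s[i], s[i + k - 1]

# Hypothetical posterior for a year effect: centered near 0, sd 0.02.
posterior = [random.gauss(0.0, 0.02) for _ in range(10_000)]

lo, hi = hdi(posterior)
rope = (-0.05, 0.05)  # assumed "practically zero" region

# "100% of the HDI within the ROPE" = the whole interval sits inside it.
inside = rope[0] <= lo and hi <= rope[1]
print(inside)
```

The whole decision hinges on that `rope` tuple, which is the buried side note the comment above is complaining about: widen or narrow it and the same posterior flips between "practically zero" and "inconclusive".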
"Is this because their methods are too weak to notice not just the improvement in gay rights over the early 2000s, but the improvement in African-Americans’ condition since the 1950s?"
I know this is snarky, but it is almost crime-think to notice that the treatment of African-Americans has improved. All the disparities that existed in the 1950s still exist, with the percentage disparity not that much different. Since poor treatment of African-Americans by American society is the only socially acceptable explanation, the only logical conclusion is that treatment must still be very bad.
> 1949
the moral decline was set in stone by 1948, that's when Orwell published his political predictions
"The r^2 statistic of the graph above is listed as -0.006."
r^2 shouldn't be negative. It looks like the coefficient listed is -0.006 and the r^2 is .008. But this is still strange; the r^2 should be much higher. Even with random data at these sample sizes r^2 should be higher than the values listed for the various surveys in Table S3. Does anyone have an idea what's going on? The numbers are so strange that I almost suspect this may be a data entry error?
Thanks for this correction.
Do they not ask a time scale at all?
I think morality rapidly declined when corona hit and several tyrannical impulses kicked in; then some 5% of the population grew a backbone and it's in decline again. I think we are better than the Romans, but not so sure about the 90s.
100, 10 or 1 years would seem prudent here.
So, um, this is a new one.
So I did what I do with all papers now, I go check to see if the data and code is publicly available, and it is, so bravo on that, but...um, this is just code vomit, for lack of a better term. Like, look at this (https://osf.io/tv5jr), it's ~100 random files created over three years, everything from R code to csv to user/...../source/prop, whatever that is, to powerpoints, and there's an R file which I think just makes one image, "data and code_upload/figure2_code.R", which is just...
Do, do people actually work like this? I mean, I've scanned through this paper, it's not that complicated, I've seen entire data pipelines without this many files. Again, people have different workflows but...but this is really outside anything I've ever seen.
And I'm torn because, on the one hand, I think publicly posting your data and code is one of the most important things academics can do and I want to applaud it here but, also, the entire reason to post your data and code is so other people can double check it and...you can't do this here. I'm not reading all these files and trying to figure out this nightmare workflow of what should be, honestly, like a dozen csvs and a single rmd file.
And I can't tell if this is, like, just a horrible workflow or malicious compliance or what but, like, this is as close to posting your code without posting your code that I can imagine. Or am I missing something?
Maybe one of these days a prestigious journal like Nature will start imposing peer review for the actual methods, the statistical analysis code, not just the written paragraphs to describe it in the main text? One can hope.
Maybe we should have a new kind of journal: instead of / in addition to grants that give researchers money to do research, funders would grant the money on the condition that part of it, earmarked in advance, is used solely to pay for a statistical review (including review of the programming code) by an independent person (maybe assigned by a reputable publication organization that may look like a journal).
Here’s my quick take on this. For context, I’m an academic in the physical sciences. Pretty much all of my papers are done in collaboration with *students*. They are doing this project/writing this paper as part of their graduate training/education. I personally am very picky about how they organize data and code, and they are still terrible at it. I’ve concluded that they have to do it poorly to learn how to do it well. It is very annoying to me as I have to deal with stuff like you describe every day. It thus takes a lot of work to organize students’ data. It takes even more work to make it useful for external consumption. And researchers have little incentive to make it so.
Yeah, despite my own annoyance with this as a student for my own work, I'm still afraid to look at some of my earliest "work".
You have to also consider the time pressure and incentives, several times I was still stubborn about this, but then got bad grades as a result of getting only a fraction of work done. (It of course helps that you get faster with practice.)
The moral drift seems to be a bit quicker these days.
What personal scriptural deviations disqualify someone from a pastoral role in the liberal branches of mainline Protestantism these days? Obviously fornication is non-disqualifying, nor is homosexuality, nor is atheism (at least in the United Church of Canada). The Wesleyan Quadrilateral has added a few more sides since my confirmation days, what with respectability and politics becoming the larger sides of the polygon.
Imagine thinking morality is a useful category.
a 0.006% annual change compounded over 50 years is 35%, according to my HP12 calculator.
1.006^50=1.349
or, more to the point, you can make a 35% change over 50 years look trivial by annualizing it to 0.006%.
1.349^(1/50)=1.006
You're off by two orders of magnitude there. 1.006 corresponds to .6%, not .006%
A .006% change would use 1.00006 as the base for exponentiation, and results in a change of .3% over 50 years.
Many Thanks! I was about to point out the same thing.
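For the record, the arithmetic both ways (a quick sanity check of the thread above, nothing paper-specific):

```python
# 0.6% per year (base 1.006) compounded over 50 years:
growth = 1.006 ** 50      # about 1.349, i.e. a ~35% total change

# 0.006% per year (base 1.00006) compounded over 50 years:
small = 1.00006 ** 50     # about 1.003, i.e. a ~0.3% total change

print(round(growth, 3), round(small, 4))
```

So a 35% change over 50 years annualizes to about 0.6% per year, not 0.006%; the two-orders-of-magnitude correction above is right.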
Here's a thought: is it possible that objective measures of "morality" (in particular, crime rates) can go down even though people are no more or less moral than before, because we have managed to outsource lots of moral duties and quandaries to the state, or solve them technologically? In previous centuries, it was a moral duty for men to carry arms to protect their families and their honor. This has become moot, even anachronistic, in places where you can call the police and expect it to protect you. In previous centuries, it was considered the duty of fathers and brothers to watch over the chastity of their female relatives, because an unwanted pregnancy would have been disastrous. With contraceptives, abortion and paternity tests, as well as economic independence for women, this has become a lot less urgent. Generally, we have managed to arrange our lives such that we need to make fewer hard decisions, placing less strain on our moral muscles.
Is perception not reality, despite all the noise emanating from the media, which seems to exploit fear with frequent telling of this or that bad event? The tolerance of theft that in the past was clearly prosecuted would suggest to a great many that morality is lower today than in the past; and in some cities, like NY, there is fear of traveling on the subways.
And yet. https://www.reuters.com/legal/former-marine-due-new-york-court-indictment-jordan-neely-killing-2023-06-28/
you are entitled to your opinion, and most people I know who live there will avoid the subway system at all costs. Perception is reality. Your data point is hardly determinative...
I know other people have said this before, but Livy is *right.* The Roman Republic had previously been one of the most functional-at-not-having-civil-wars states in the Mediterranean, thanks in part to Traditional Roman Morality - not the murdering women part, the ridiculous and implausible devotion to the laws part that was regularly celebrated as the chief element of the Roman state's success. That collapsed in the generation of Livy's parents with the Marius and Sulla wars and reached a nadir in his own generation with the massive, bloody civil war that preceded the transformation of Republic to Empire and the de facto return of the Kings of Rome.
I doubt he regarded being at war with someone else as indicative of immorality.
I think the distinction between civil and foreign wars is legitimate! As TGGP says, I think he regarded "Rome conquers its enemies" as a sign of greatness; "Romans kill each other" as a sign of moral decay. A state that conquers other states is evil-but-functional (I don't think Livy would include the evil), a state that fights itself is dysfunctional.
And it's not as if the Rome of Augustus thought that conquering its enemies was bad, or wrong. It's just that when Augustus tried, it mostly didn't work.
I'm less convinced the problem was that Rome couldn't function without being at war, as opposed to Rome's magic civil-war-not-having powers breaking down when the ratio of citizens to noncitizens with fighting skills got too low, which would inevitably happen if it fought lots of wars without major political reforms.
(And the wars, I theorize, were mostly a matter of war heroes getting elected and reelected and celebrated and honored and getting to loot their victims and seize all their land - the state didn't need war, it just really, really wanted it.)
Oh boy. Well, you did cover most of my own issues with it.
(Still, what's up with this talk about "other countries [than the USA]" that makes many of these assumptions even worse?? What about India and China, which alone (today at least?) make up half of mankind? What about the Muslim world and their very different morals? What about the fall of the USSR?!?)
Speaking of footnote 5 and the final bit about wealth:
Well, Livy seems to have been roughly correct: in terms of state capacity, demographics and economics, the Roman empire peaked around the first century!
https://acoup.blog/2022/02/11/collections-rome-decline-and-fall-part-iii-things/
(Under the assumption that declining """morals""", whatever he meant by that, would cause ruin.)
Here's another example, though, from the (late) 1st century (that I just read today!), about a very different (but is it?) dimension:
https://www.ecosophia.net/the-destiny-of-disenchantment/
And I'll have to disagree about the increasing "wealth". I'm not sure what your definition of it is, but it sounds like it doesn't include the consumed natural resources or the increasing environmental damage; and because a lot of people indeed don't include those, we are still careening faster and faster in the direction of ruin, despite having been warned about it more than half a century ago!
Is there a bias that applies not just to morality, but to other aspects of civilization as well? For example, in https://www.lesswrong.com/posts/DoLQN5ryZ9XkZjq5h/tsuyoku-naritai-i-want-to-become-stronger, Eliezer says:
> In Orthodox Judaism there is a saying: “The previous generation is to the next one as angels are to men; the next generation is to the previous one as donkeys are to men.”
Or consider the renaissance writers, doctors, and scholars who considered their responsibility as preserving and passing on the superior and untouchable achievements of the Greeks and Romans. In several scholarly traditions--I believe including the aforementioned Judaism but also Chinese philosophy as well as ideologies like Marxism and Objectivism that flow from a single person's writings--you see this same phenomenon. And it's extremely common in fiction, covering not just morality, but domains like architecture, science, technology, proficiency with magic if it exists, wealth/economic development, medicine, etc.
Yes, sometimes this belief is warranted; see https://slatestarcodex.com/2017/10/15/were-there-dark-ages/. But overall it seems like it points to a bias that isn't just about shifting moral standards (also, it seems unlikely to me that moral standards shifted so rapidly in ancient Rome). I don't know if "reverence for the old" is just its own whole black-box bias that evolution gave us, or if it arises from other biases and/or common tropes in human societies.
It used to be that tradition was the most potent cultural force, see e.g. https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/, so this bias was plenty justified, like they usually are. These days technology has plausibly dethroned tradition in that role, so old heuristics aren't quite as accurate.
"I can’t tell you whether morality is increasing or decreasing. But a first stab would be to note that wealth is increasing. We might expect those virtues which wealth makes less necessary, like industry and chastity, to decline - and those virtues which wealth makes more convenient, like compassion and pacifism, to increase."
The Belle Époque/Edwardian Era was a prosperous and relatively stable (in terms of internal politics of the major powers) time in Europe and in Britain. It was fulminant with antisemitism, colonial depredations, and wretched exploitation of industrial labourers as opposed to compassion, and seething popular nationalism as opposed to pacifism, ending of course in the war.
Chastity did decline, to be fair.
I don’t even understand the purpose of this paper. I get that it essentially functions as a piece of activism above all else, but I don’t even get who this is for. If MG’s goal was to draw more attention to progressive policy issues, this method of doing so seems like a complete waste of time. Usually when conservatives talk about moral decline, they think of things like acceptance of drugs, sex/pornography, and violence. They then point to overdose/addiction rates, depression/mental illness, violent crime rates etc. as evidence that these things are bad. They will often tie this into church but there’s a lot of secular conservatives these days as well.
So if you’re a progressive who never bought into the conservative idea of moral decline, this paper just exists to reinforce that. If you’re an intelligent conservative and read this paper to gain their perspective, you’re just going to see “oh, they didn’t even include violent crime or drug overdoses” and rightfully dismiss it. If you’re an uneducated MAGA die hard who only reads headlines, the odds of you changing your mind because “studies show morality isn’t declining” are literally zero. So this whole project, which seems like it took these gentlemen a considerable amount of time and effort, was really just a colossal waste of time. I don’t even understand why stuff like this gets published. Don’t these people want to be impactful on the world, and isn’t that why they became researchers?
I dunno, people holding these sorts of opinions seem to have plenty of impact on the world, so they must be doing something right, by their lights that is.
Yeah I get what you’re saying. I guess my point is that their impact is based upon something independent of putting together elaborate cherry picked “research” like this. I think the appeal of what we call “wokeness” is purely emotional to most people who adhere to it. I know the “wokeness=religion” point has been beaten to death, but people are catholic for the same reasons they’re woke: it gives their life meaning and makes them feel like a good person. But you don’t really see the Vatican commissioning elaborate cherry picked studies on the effectiveness of prayer (although maybe they should). Or perhaps they do and I don’t know about it, but either way I just feel like that’s very unlikely to convert anyone at all.
That's mostly because Catholicism's legitimacy isn't based on allegiance to "science", unlike that of wokeness.
You’re right.... it’s just really disappointing, to be honest. I don’t understand why these guys wouldn’t want to spend their time and grant money actually trying to figure something out. Then it goes through peer review and gets published by top journals. I think I speak for a lot of people when I say I went from fully believing in academia, to viewing it as an institution that was “obviously biased but still credible”, to now pretty much treating new social science findings like Bigfoot sightings: “there’s almost certainly a more logical explanation here, so I’ll dismiss it until proven otherwise”. Even in the “hard sciences” these issues persist, though they're not quite as susceptible to the ideological bent.
If you set off to actually figure something out, Problematic Implications might turn up, and then your goose is cooked. I'm not sure how conscious those academics are about risks of this sort, but, like they are fond of saying, it doesn't matter much compared to systemic issues.
The study sounds like bullshit. I have no clue if a real moral decline is in progress, but my intuition says no over the time scales I care about (and I tend to care about pretty long time scales). I don't read the current news at all, but spend quite a bit of time learning about older history, and study pre-history and even cosmology as hobbies. I tend to reflect upon things like the sheer destructiveness and pointlessness of WWI, children working in coal mines, the fact that the French used to publicly burn house cats alive as a form of entertainment. The past sure seems to contain a lot of examples of utterly careless disregard for human life and literally no regard whatsoever for any other form of life. Many cases of flat-out revelry in the suffering of others.
In comparison, it's hard for me to really think it matters that people are more likely in some countries in 2020 to have sex before marriage compared to 1950. Maybe that has more bad consequences than good and we'll find out down the line it was a bad idea, but in the face of the larger general trend whereby humans far more broadly care at all that other humans in the world are suffering and try to do something about it, that we widely care about other humans aside from our own families at all, taste-based morals don't seem like they make much of a dent. They get magnified into wedge issues and end up hotly debated because they're the only point of divergence we've got and the middle children of history feel like they need something to argue about, but they're not really that important.
As for decline in trust and general feeling of safety, I don't doubt that's quite real, but also a natural consequence of humans encountering far more unknown strangers than ever before, in part because population has increased and thus population density has increased, in part because humans are more mobile than ever and living elsewhere than where they grew up, and in part because of omnipresent telecommunications making it seem like we're surrounded and threatened by far more people than are physically near us. Other humans aren't any less trustworthy on average, you're just more likely to encounter someone who is untrustworthy when you encounter 15,000 unique people a year compared to if it was only 50. And those people might even actually be more likely to try to exploit or deceive you, not because they're worse people, but because you're both strangers and they're less likely to suffer any consequences than if you were part of a small community where everyone knew each other.
Social media tries to simulate this kind of thing with call outs and cancel culture, but it's overwhelmingly for stupid reasons, not anything that makes a material difference to any modal person's feelings of trust and safety.
Did some quick reading on "HDIs within the ROPE".
HDI = Highest Density Interval = a range of values containing some large fraction of the posterior. Used to characterize the range of values that look plausible after Bayesian analysis.
ROPE = Region Of Practical Equivalence = a distance close enough to some value to be "basically the same". Used to let you do null hypothesis testing in a Bayesian framework.
"HDI within the ROPE" = almost all of the density of the posterior is so close to 0 (or whatever null) that it's basically just 0.
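To make the mechanics concrete, here's a minimal sketch (my own illustration, not the paper's actual code, which uses rstanarm's defaults) of computing an 89% HDI from posterior samples and the fraction of that interval's mass falling inside a ROPE of ±0.1:

```python
import numpy as np

def hdi(samples, cred=0.89):
    """Narrowest interval containing `cred` of the samples
    (assumes a unimodal posterior)."""
    s = np.sort(samples)
    n = len(s)
    k = int(np.ceil(cred * n))          # number of samples the interval must cover
    widths = s[k - 1:] - s[: n - k + 1]  # width of every candidate interval
    i = np.argmin(widths)                # narrowest one wins
    return s[i], s[i + k - 1]

def fraction_of_hdi_in_rope(samples, rope=(-0.1, 0.1), cred=0.89):
    """Share of the HDI's samples that land inside the ROPE.
    Near 1.0 means 'practically equivalent to the null'."""
    lo, hi = hdi(samples, cred)
    in_hdi = samples[(samples >= lo) & (samples <= hi)]
    return np.mean((in_hdi >= rope[0]) & (in_hdi <= rope[1]))
```

With posterior samples tightly clustered near zero, the fraction approaches 1 ("HDI within the ROPE"); with samples far from zero, it drops to 0.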
So what posterior are they calculating? What *parameter* has "HDI within the ROPE"? Here's their methods on study 4:
"We fit a linear model for each survey. The year of each survey was always entered as a predictor and the outcome was always the average perception of current morality. We used R^2 values as a measure of effect size. We fit Bayesian models using the Rstanarm package in R[33] and extracted the percentage of the 89% HDI that was contained in the ROPE, which was by default defined as ±0.1 standard deviations. We used the package’s default Markov Chain Monte Carlo and prior settings (M = 0, scale of 2.5)."
That's... not actually very helpful? They don't say what model they're fitting to for the Bayesian analysis, and I don't understand what the ROPE is +/- 0.1 standard deviations OF. I'd look at the code to figure this out, but it's behind a "request data" wall, and I'm waiting on the request with no idea if it will be granted.
Still, I'm a little alarmed at Scott's treatment of statistics in section II. The reason we adopted statistics in science was to prevent people from looking at graphs and saying "this data obviously shows a trend, you can see by the lines" when the data doesn't, in fact, show a trend. Turning around and saying "if the statistics say there's no trend there, then I don't trust statistics" is a red-flag move -- it could be the right move, given a strong enough data trend and a mysterious enough statistical test, but if you find yourself making it then you should halt and consider whether you should catch fire.
...is this data strong enough that we can ignore the statistical test? I wouldn't say so. We see a bunch of mean values at each time with no hint about the range or distribution within each summed-up point. This could easily be a bunch of noise. If I collected a dataset like this (I'm in biology), I wouldn't QUITE throw it out as noise without doing any statistics on it at all... but it'd be close.
I'm also a bit concerned that we're cherry-picking ONE graph to stare at out of THREE different charts at the linked source and TWELVE questions assessing general morality/trustworthiness used in the original paper.
"Still, I'm a little alarmed at Scott's treatment of statistics in section II. The reason we adopted statistics in science was to prevent people from looking at graphs and saying "this data obviously shows a trend, you can see by the lines" when the data doesn't, in fact, show a trend."
I'm with you here. It may be a bad study and inappropriate use of data, but the way we handle that is with more rigor not less.
Another theory (unrelated to this data) is just simple moral pluralism. Society is FOR SURE getting more "cosmopolitan".
Things could be getting better and better in aggregate, but also since people's individual sense of morals are getting more and more diverse, there is less and less sense other people are moral (because they don't match your standards and care about other things).
Also on the Livy thing, or really any "hey the Romans thought they were in decline" comment. The Romans often were in decline. There were a lot of ups and downs, and the peak of their geopolitical power didn't necessarily match up very well with their peak of civic mindedness and personal virtue.
This exemplifies a concern I have about the increasing tendency of traditionally scientific journals like Nature to publish hot button social science papers. It's easy to load value judgements and controversial philosophical positions into such papers, even without dubious intent, and present them as empirical, scientific findings. While it's a nice dream to do truly scientific social science, I suspect this turn will likely do the opposite and let the problems of social science corrupt otherwise scientific journals rather than making academic social science more scientific.
On an institutional level at least, the world has become much more moral during the last decades. The ball started rolling with the 1945 agreement on the Universal Declaration of Human Rights, first in the so-called West, but increasingly everywhere. For example, the US cannot any longer sell arms to Nigeria to quell Boko Haram, since the Nigerian government cannot offer the type of guarantees that no violations of Human Rights will take place, that US weapons manufacturers need for their paperwork. (While on a mass level, the woke-phenomenon indicates that we live through a historical period ideologically dominated by a normative signals arms race, i.e. signalling-higher-morality-than-you.)
The victims of Boko Haram might disagree with your analysis of the morality of that particular issue.
Yeah well...my point was only that human rights-based moral thinking is increasingly constraining policies everywhere. I believe this is an empirical fact that can be documented. Whether or not this is a good or bad thing (your question) is an entirely different question. My opinion, for what it is worth, is that the effects of this increasingly "morally constrained institutional decision-making" vary. Plus, it is likely to vary depending on your deeper moral outlook: Deontology (do what you perceive that you are morally duty-bound to do, with no regard for eventual negative side-effects of your decisions); versus utilitarianism (aka the ethics of consequences): In this case, giving weapons to Nigeria to do extrajudicial killings of Boko Haram (plus its local competitor, the Islamic-State Nigerian offshoot) may have better long-term outcomes than making it difficult for the government (plus vigilante groups affiliated with the Nigerian government) to do such killings - since this may, in a worst-case scenario, result in the disintegration of the Nigerian state and a descent into chronic civil war. (Whether this is a real risk is an empirical question, where you have to assign probabilities to different outcomes. Not easy, but not impossible either.) My empirical point in this context, though, was only that in practice human rights deontologically constrain policy-making to an increasing degree.
You believing that "human rights" is anything other than a cover for the US/NATO to impose its will on the world discredits your entire comment.
?
I'm saying that the concept of human rights in the real world has been nothing more than an opportunistic device for the US/NATO to pursue their global hegemony.
You misunderstand my comment. I was making an empirical point, not putting forward any normative statement or normative hypothesis.
Concerning your normative hypothesis: Like all conspiracy-type hypotheses it is hard to falsify, but it is weakened by the fact that the US has been more reluctant than most countries to ratify Human Rights Conventions or Covenants. For example, the US is one of only six or seven UN countries that have not yet ratified the Convention on the Rights of People with Disabilities. And the US is the only UN country left that has not ratified the Convention on the Rights of the Child.
One can always launch supporting hypotheses to explain why a country that allegedly "uses Human Rights as an opportunistic device" abstains from ratifying HR Conventions and Covenants itself. But Karl Popper rightly advised caution in putting forward supporting hypotheses to save an initial hypothesis that faces contradictory empirical evidence, as that threatens to make the initial hypothesis immune to falsification.
Be that as it may, I do not wish to go into a further debate concerning possible conspiracy motives related to HR, since my point was empirical, not normative.
I'm tempted to link the quadrupling of male dropouts to skyrocketing marijuana consumption but, alas, trying it in college or at a party a couple times hardly qualifies as "use".
Although I'd appreciate the company. I suspect our adult population of tokers is closer to 10%.
I would not be surprised if 1/3 of US adults used marijuana in some form at least once a month. (I also wouldn't be surprised if the proportion was smaller, but just 10% would be shocking.)
I don't think it's true that science advances funeral by funeral, but morality very well might. I don't think it's a coincidence that there wasn't a civil rights movement for American blacks as successful as the one led by Martin Luther King Jr. until after the Confederate veterans of the Civil War were all dead.
This is one of my few worries about radical life extension: it's a lot easier to raise a child not to be a bigot than it is for an adult that is a bigot to stop being one, and you can say the same thing about other changes in people's individual morality. What other attitudes would someone born in 1800 have to change before they wouldn't be considered a moral monster by today's standards?
Moral decline seems like a pretentious phrase for “the kids these days, I tell ya.” It seems mainly that the moral standards of the prior generation simply become irrelevant and are replaced with different standards.
"There are people in every time and every land who want to stop history in its tracks. They fear the future, mistrust the present, and invoke the security of a comfortable past which, in fact, never existed."
Senator Robert Kennedy (the real one not his son)
It seems to me there's something wrongheaded about trying to figure out whether people believe morality has declined. The effort presumes that this question, as formulated, is one that people think seriously about, and have definite views on. I'm not at all sure most people do. There are lots of big general questions like "is morality declining?": what makes for a happy life? is it important for kids to be exposed to the arts? what kind of landscape is most beautiful -- seascape, meadow, mountain view, other? are some animal species happier than others? which sport takes the most skill? is gambling completely pointless? will our species reach the stars someday?
I think most people do not have a view about most of these questions, though of course you can get people rambling on one of these subjects, and if you insist they fill in an answer or a rating of each on a survey people will do it. I, for instance, do not have an opinion about whether morality is declining, and if pushed to give a detailed and honest answer I'd start by saying that I don't know what, exactly, I'd consider to be components of morality. A few things people do seem clearly bad to me, a few seem clearly good, and most of the rest seem interesting and complicated to me, and when I think them over I am not very likely to be asking myself whether they are ethical or not -- I'm asking myself other questions. It's not uncommon to find someone who has a definite view about *one* of these questions, because it's of personal importance to them. I expect that most astrophysicists have a view about whether our species will reach the stars. But I really doubt that I'm unusual in being someone who does not walk around with an Opinion of Trends in Morality meter in me someplace, or a Skill Level Required by Different Sports table.
> They mention in the text that these kinds of questions did better than others; 50% report improved treatment of gay people. But what are the other 50% thinking??!
I'm not sure how closely you quoted the actual poll questions, but the wording in your post asks whether "things have gotten better", not whether "gays are being treated better", so I could imagine someone who thinks we have TOO MUCH wokeness today might answer "worse".
Most serious expositors of moral decline (or, more broadly, civilization decline) posit trends on the order of centuries, not decades. Moreover, they often hold that, over enough centuries, you see a sinusoidal pattern, with rises alternating with declines.
Thus, there is zero contradiction in holding that we have long been in a secular decline, that Livy, writing in the 1st century BC, was too, and that there have been rises in between.
I was with you until section III, where you started bringing "conservative values" into the equation. Yes, it's true that if you focus on the subset of moral values that were considered important by people in the 1940s but aren't considered important by people in 2020s, then it will seem as though morality has declined over the past 80 years. By the same token, you could just as easily argue that by the standards of 2020s progressives, morality (i.e. opposition to racism, sexism, and homophobia) has actually increased over time! But these conclusions don't say anything interesting, and they don't really answer the question except in a frustratingly narrow sense. All it really amounts to is an acknowledgement that some of society's dominant values change over time, which is obvious to anyone who's even mildly familiar with history.
To actually make the question worth asking or answering at all, you need to focus on the moral values that *don't* change, the ones that transcend the political divide. Look at the things that people in the 1940s *and* the 2020s would both agree are good, and see if people are doing those things less. Look at the things that both conservatives and liberals would agree are wrong, and see if people are doing those things more. That's precisely what the studies you quoted were trying to do (both 40s conservatives and modern progressives would agree that violent crime is bad), so criticizing them for ignoring partisan and period-based standards of morality is completely missing the point. Not including those standards isn't anti-conservative bias, it's just neutrality. (If they'd really had a progressive bias, they could've just compared the number of Black, female, and openly LGBT Senators in 1940 vs. 2020, and then claimed the dramatic increase as proof that people were actually much more moral than they used to be.)
The study includes this poll question as an indicator of morality:
"Compared to the past, have things gotten better, worse or stayed the same [regarding] treating gay people with respect and courtesy? (2002 vs. 2013)"
How is this a moral value that doesn't change and transcends the political divide?
Also, moral values that change are still moral values! You can't declare someone born in the 1940s wrong for saying morality is declining if they're correctly perceiving declines in the values they care about, just because people in the 2020s don't care about those values.
Okay, here's an example: Let's say a devout Christian is trying to convince people that society keeps getting worse as a result of the ongoing decline in religious beliefs. If someone asked him to provide evidence for that claim, and his reply was "just look at how church attendance rates have dramatically plummeted," that would be a poor argument, because no one who isn't a devout Christian themselves is going to care about church attendance rates. It's only convincing to people who already agree with him! If he wants to make an argument that's convincing to anyone else, he needs to appeal to shared values that aren't exclusive to Christianity, and prove that *those* values are also declining as a result of secularism.
My point isn't that conservative moral values aren't "real" values (whatever that means). My point is that, if you want to make a case that doesn't boil down to the tautology of "the decline in conservative values is bad because I personally like conservative values and think it's bad to have less of them," then you need to ground it in some value system that both conservatives and non-conservatives can agree upon. Otherwise, it's not an argument, just a lamentation.
The opposite study would not be taken very seriously, I imagine. "We find that polls consistently show that people think that the poorest among us are being more and more taken advantage of. We show however through objective metrics that the average inflation adjusted income of the poorest among us from 1949 to today is in fact ... . This is important because a prolonged false belief in economic decline can redirect scarce resources away from the pursuit of spiritual endeavours which - given our position in Maslow's hierarchy - are those of greatest importance in the present era".
TBH (and I know this is a total tangent, but whatever), I think Maslow's hierarchy errs in putting physical needs as more fundamental than spiritual ones. Maybe at the literal subsistence level ("Am I going to starve to death?") they are, although once you get past that point, people who feel their life is meaningful seem generally better able to put up with poverty and physical deprivation than rich people are able to put up with a sense of meaninglessness and ennui.
100% agree.
The point about the specific morality of different periods is valid, but also points directly to how "morality" is being changed. This is indirectly alluded to in noting that various standards and behavior concerning racial bias, religion etc have lapsed but it does not automatically follow that modern "moral" practices are superior. They are just different.
It does also appear - very obviously to such as myself or those who similarly do not share PMC "morality" - that said modern standards are defined as being inferior to the past.
A simple generic example would be marijuana use. While I personally don't care about marijuana use one way or the other - the fact is that the laws governing its use have been under assault for decades and the 49% referenced are *all* breaking federal law still.
Yes, people drank alcohol too when it was illegal by Constitutional amendment 100 years ago - but they didn't pretend it was moral.
Is there a reality of social scientists talking themselves stupid? This is a good example in a long litany of clearly and obviously silly work being published. I truly see it as some of the most base and clearly biased nonsense, akin to a toddler whose face is covered in cookie crumbs insisting that it isn't and that they don't know what happened to the cookies.
The nuclear article opens with a similar set of conceits about the apparently super powerful anti nuclear activists who stopped a giant industry in its tracks. It just sounds so dumb when thinking about all the other ways in which everyday activists have failed to stop powerful industries, even after clear and large scale harms have been inflicted on them and their children. When we engage in intentionally obtuse and extreme thinking in isolation about observing anti nuclear activists, dropping public approval polls, and the failure of the nuclear industry to expand…then it is obvious to them how a story shapes up of ultra powerful activists who the government listened to and enacted laws to do what they wanted before anything bad happened to anyone. As if!
Except you know…how this almost never happens in any other topic. Could it be…nuclear actually is unsafe?! Hard, expensive, impossible to insure commercially, and involving extreme transportation risks if larger amounts of material were being moved around? Hmmm, this thinking in isolation is truly absurd. I call this talking themselves stupid. This is the same bone-headed nonsense sham logic which calls for and demands published studies on topics like animal intelligence or whether babies feel pain. When any mother or anyone who works with animals can tell you the answers to these obvious questions. Nope, get lost thousands of years of experience, the real arrogance of the self-appointed experts is now on the scene to ‘study’ things.
Yes indeed if you check in with common people, they can be trusted and are correct about their own lives and beliefs. This gating off of knowledge behind ‘expertise’ is absurd. The intellectual crime of credentialism and incredulous attitudes towards anything outside their own contrived orthodoxies is a plague on progress and a waste of resources.
I don't know whether people are behaving worse. I do think that there are more people speaking against conventional morality than there used to be.
I've tried to edit this twice, and substack has immorally made the comment disappear.
Security has costs. Locking and unlocking a door takes time repeatedly. Losing a key or other similar failures takes more time, and it's unpredictable.
In cold climates, people seem less likely to lock their doors, presumably because they don't want to leave their neighbors outside to freeze.
When I lived in Newark, Delaware (a medium-sized college town), there was a while when, if I mentioned it, people would get angry at me. I'm not sure why, but I think it was because they didn't want to hear me complaining if I got burglarized.
I write my long comments in Google Docs or a note that I then paste into the comment section.
Interesting comment btw
Good idea. I just found that trying to edit a comment makes it disappear. Note that this issue is about editing comments, not about writing them the first time. Do you save comments when you edit them?
As I said, I just write and edit them in a Google doc before I post them here, usually.
I don't think I ever edit comments here.
Thinking about the connotations of "moral decline", shoplifting has become an organized crime project rather than just individual decisions. Shoplifting has become a much more serious problem, but I don't think people think of organized crime taking up shoplifting as moral decline. Or do they?
I think of shoplifting specifically as more of a legal than a moral issue. A lot of the places where you see these organized shoplifting rings have de facto legalized shoplifting, and have de facto (or even de jure) illegalized the defense of property with force. And/or don't have the law enforcement resources to address the problem.
Which is why, with perhaps the exception of a few violent offenses, I'm skeptical of crime-as-proxy-for-morality. A society where police vigorously pursue shoplifters and every other shopkeeper has a shotgun under the counter is going to have a lot less property crime than a society where those things aren't true, even if the two populations have identical moral character.
I don't see how two societies could have the same moral character but different rates of shoplifting. Someone who chooses to steal is exhibiting worse morality than someone who doesn't choose to steal, regardless of whether their choice is influenced by the goodness of their heart or the shotgun behind the counter or the fear of post-mortem hell.
Shoplifting as part of organized crime takes fewer people to cause more damage if that matters.
As with everything here, this quickly devolves into a semantic argument about what morality is. But no, I don't draw much moral distinction between a person who steals and a person who wants to steal but only doesn't because they fear punishment. The consequences are different, but the person who is deterred from stealing is no better on a personal level.
As, e.g., http://www.fairlynerdy.com/what-is-r-squared/ explains, R^2 can be negative. The Supplement Table S3 reports 3 of the 107 items with a negative R^2:
p 54 item 79 General Social Survey R^2 = -0.004
p 61 item 104 European Social Survey R^2 = -0.00006
p 62 item 107 European Social Survey, R^2 = -0.0001.
In the article on p 2, an adjusted R^2 = -0.002.
But given the context, it seemed worth checking on the "-0.006."
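For anyone who hasn't run into a negative R^2 before, here's a toy illustration (my own, not the paper's code): the general formula 1 - SS_res/SS_tot drops below zero whenever a model's predictions are worse than just predicting the mean of the data.

```python
import numpy as np

def r_squared(y, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.
    Goes negative when the model fits worse than a constant mean."""
    ss_res = np.sum((y - y_pred) ** 2)   # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)  # total sum of squares
    return 1 - ss_res / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0])
print(r_squared(y, np.full(4, y.mean())))            # predicting the mean: 0.0
print(r_squared(y, np.array([4.0, 3.0, 2.0, 1.0])))  # reversed trend: -3.0
```

Ordinary in-sample OLS with an intercept can't do worse than the mean, but adjusted R^2 (which penalizes for the number of predictors) can dip slightly negative, which is presumably what's happening with these near-zero values.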
[By the way, the paper was cited by Conor Friedersdorf at https://www.theatlantic.com/newsletters/archive/2023/06/the-battle-over-smartphones-at-school/674338/. Perhaps this has already been noted.]
Marriage has declined.
Fewer people get married. They get married later. And it ends in divorce far more often.
In 1950, what % of children under the age of 18 were living with both married biological parents? What is that % in 2020?
This is probably the #1 thing people notice. There is no fudging it.
Even those social classes that partially reversed (but didn’t completely repair) the divorce rate did so mainly by delaying marriage and having well below replacement fertility. People in 1950 were having 2.5 kids young and still keeping it together.
If the lack of chastity weren’t correlated with (causing?) the lack of stable marriage, people wouldn’t care, but many see them as linked.
Sure, but people putting up with horrendous rates of spousal and child abuse instead of being able to leave is a societal level morality failure too.
About 1/3rd of divorces are a result of abuse. These were pretty easy to end in the "fault" era as well.
About 2/3rds of divorces aren't the result of abuse. These were enabled by the "no fault" era.
Child outcomes match this pattern. If the divorce ended abuse, it improves child outcomes. If it didn't, it retards child outcomes (also, divorce tends to end in abuse from boyfriends).
https://fantlab.ru/autor74643
Scott Alexander (Скотт Александер)
The last part, about people's standards being whatever was imprinted on them, is very interesting to me.
A question this poses for me: how do some people manage to change their standards? I know some people do. Is this just very uncommon?
No, it happens in our lifetimes. Take support for gay marriage in the US. It flipped in a span of 10-15 years, and it’s people that used to be against coming around, not just the old guard dying of old age.
I think we are quite a bit more moral now than 1977, the year of my birth. I have to imagine that lots of my generation would agree.
> They mention in the text that these kinds of questions did better than others; 50% report improved treatment of gay people. But what are the other 50% thinking??! The answer has to be something like “2002 to 2013 is too short a time to measure even extremely large effects that were centered around exactly this period”.
Well, presumably some of them are thinking "the amount of 'respect and courtesy' that must be extended to gays now is well in excess of what would be appropriate, which is a change for the negative compared to the past". Increased respect and courtesy aren't automatically good things. The question was whether things have gotten better or worse, not more or less courteous.
The last paragraph in their discussion is interesting. They discuss how the illusion of moral decline could lead to what, for lack of a better term, seems to be in their eyes actual moral decline “If low morality is a cause for concern, then declining morality may be a veritable call to arms, and leaders who promise to halt that illusory slide—to “make America great again”, as it were—may have outsized appeal.”
Then they end by saying “ Achieving a better understanding of this phenomenon would seem a timely task.”
If their thesis is correct, then there’s nothing “timely” about this task - the illusion of moral decline has been affecting human society for more than 2000 years, including in the periods where we founded democracies, abolished slavery, and fought Hitler. If this illusion had been getting people to focus on imaginary problems and choose bad leaders, then it has been doing so throughout history, well before the 2016 US election.
“This phenomenon” - the one they went looking for and - surprise! - found? The conclusion doesn’t fit the findings, you say? Consider that this whole exercise was not a disinterested search for “understanding,” but confirmation bias masquerading as new knowledge. It adds exactly nothing to our understanding of human history but presumably the researchers get pats on the back and more funding. I’m sure there are more funds available when you make sure to tell your funders what they want to hear.
It's easy to notice what has gotten worse -- e.g., when it comes to spectacular crimes, there are more school shootings and other mass shootings than when I was a kid in the 1960s-1970s. On the other hand, there appear to now be fewer political assassinations, bombings, skyjackings, kidnappings for ransom, and bank robberies. But it's hard to remember what isn't around much anymore.
Growing up in Los Angeles, for example, RFK was assassinated at L.A.'s most famous old hotel the night he won the Democratic California primary when I was nine, the Manson Murders of Sharon Tate and friends were a few miles away when I was ten, and the LAPD burned down the Symbionese Liberation Army's house (but kidnapped heiress Patty Hearst wasn't there) when I was 15.
Was Los Angeles crazier around 1970 than it is today? I'd guess ... probably, but then who really knows? I'm not as entertained by the local news as when I was a kid, so I can't really compare fairly.
I'm a pretty young guy, and I was gobsmacked to learn that in the early 70s there was an 18 month period with 2,500 domestic terror bombings in the US. Casualties were relatively low, granted, but still...
Trust and tolerance are inversely proportional. As social tolerance for differences improves trust in society declines. Trust is a function of being able to understand what others around you are thinking, which is hurt by tolerance of differences of upbringing.
I am not sure that is true, or at least I think it leaves out important bits. Specifically, trust has a lot to do with doing what you say you will do. In part, I suppose, that implies that people will say what they are thinking, but it also means they honor commitments made previously that you might not be present for. You can be very different from me in many ways, but so long as I think you will follow your word and commitments, I can trust you.
Moral decline sounds like it fits: All the mature decent old people keep dying off and being replaced by these immature babies with no self-control. Been happening forever, kinda weird.
There has been moral progress due to advancements in science, technology and knowledge. There is less suffering today than in the past and more flourishing.
Re footnote 4 - That's kind of the point of Haidt's "The Righteous Mind" - liberals don't realize some people have more moral "flavors" than care and fairness. So they can ask people about morality and not think about other things like tradition, sanctity, etc.
It doesn't sound like a very good study, but I don't think the perception of declining morality has much to do with declining morality however it is "measured". There are two ways to interpret all those reports of declining morality over the last several thousand years - Nehemiah, anyone? Either morality really has been declining since the Garden of Eden or Olduvai Gorge, or many, possibly most, people in every age have had a sense of declining morality.

That latter should not be a surprise. We are taught in every society to develop a moral sense of what is right and what is wrong. It's part of our training to live in human society, and I can't think of a culture that doesn't frame human behavior in terms of moral judgement. Despite this, no society has ever lived up to its moral precepts. People aren't easily programmable robots, and moral codes are always full of conflicts, contradictions and compromises. Thou shalt not kill - unless you serve in the military, in which case you might get a medal for killing or be executed for failing to do so.

That means, as people age and experience life as adults, they are exposed to a world far from the one they were taught about. Most people do follow their childhood moral precepts to some extent. Society would be much worse than it is without that. However, almost all of us wind up compromising on some points. Worse, many people violate those precepts, but only some of them are punished while others thrive. If you don't have a sense of declining morality as you age, you are either extremely well grounded or simply oblivious.
P.S. I was reading a study, "The Age of Anxiety? Birth Cohort Change in Anxiety and Neuroticism, 1952-1993", which addresses anxiety in teenagers. "The average American child in the 1980s reported more anxiety than child psychiatric patients in the 1950s." More recent reports on adolescent anxiety suggest that the trend has continued, though, I'll add cynically, with climate change replacing nuclear war as the big bad. I'm sure my parents were all relaxed and mellow when war broke out in Europe and Asia during the Great Depression, and my great^N grandparents positively euphoric in the face of the Revolutions of 1848 and the American Civil War. As with adults facing declining morality, could it be that adolescence is a time of anxiety?
Isn't some or other kind of Moral Decline the eventual fate of every civilisation? https://grahamcunningham.substack.com/p/invasion-of-the-virtue-signallers
>In the 60s, in the city center, they felt comfortable walking alone at night. Now, in the suburbs, still they feel comfortable walking alone at night.
Except the proportion of people living in cities went up since the 60s, not down. (Maybe suburbs are being lumped in with urban here, so I could be wrong about the urban/suburban split vs urban/rural split.) https://www.researchgate.net/figure/Percentage-of-US-population-living-in-rural-and-urban-areas-from-the-years-1800-to_fig2_320207578
ROPE and HDI are Bayesian statistical terms. ROPE = Region of practical equivalence. HDI = Highest density interval.
The gist of it is that you pick a minimum effect size that you would be willing to consider interesting, the authors of this paper chose +/- 0.1 standard deviations which I believe is the value Kruschke recommended for very conservative analyses (See the Kruschke paper from 2012 titled something to the effect of "Bayesian analysis is better than T-tests, you stupid losers" for more on this).
You then draw different values from the posterior distributions of the group means and standard deviations to produce a sort of ad hoc posterior distribution of effect size.
The HDI is the smallest portion of this effect size distribution whose integral is equal to 0.95. In theory (and if I remember correctly), the percent of the HDI that falls outside of the ROPE is roughly equal to the percent chance of a significant result.
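The procedure described above can be sketched in a few lines of NumPy. This is a toy illustration, not the authors' actual analysis: the posterior draws of the effect size are simulated here as a normal sample, and the `hdi` helper is a simple sorted-sample implementation of the "smallest interval containing 95% of the mass" idea (it assumes a unimodal posterior).

```python
import numpy as np

def hdi(samples, mass=0.95):
    """Smallest interval containing `mass` of the samples (assumes unimodal)."""
    s = np.sort(np.asarray(samples))
    n = len(s)
    k = int(np.ceil(mass * n))
    # Widths of every interval spanning k consecutive sorted samples;
    # the narrowest one is the HDI.
    widths = s[k - 1:] - s[: n - k + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]

rng = np.random.default_rng(0)
# Hypothetical posterior draws of a standardized effect size.
effect = rng.normal(loc=0.05, scale=0.04, size=20_000)

lo, hi = hdi(effect)
rope = (-0.1, 0.1)  # the +/- 0.1 SD region of practical equivalence
inside_rope = (effect >= rope[0]) & (effect <= rope[1]).mean() if False else \
              ((effect >= rope[0]) & (effect <= rope[1])).mean()
print(f"95% HDI: [{lo:.3f}, {hi:.3f}]")
print(f"share of posterior inside ROPE: {inside_rope:.2%}")
```

If the whole HDI falls inside the ROPE you would accept practical equivalence; if it falls entirely outside, you would declare a credible effect; overlap leaves the question undecided.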
The fact that the authors chose a ROPE of +/- 0.1 is a point in their favor since they're really setting the bar low for a significant result. However, this method still relies on a lot of steps that are susceptible to researcher bias and I'd have to read more of their paper than I have time to in order to get a good grip on how trustworthy their work is.
I will say, when I studied this method several years ago the first thing I said to the person reading the Kruschke paper with me was something like "Wow, somebody could definitely use this to bamboozle a paper reviewer into publishing nonsense one day if they wanted to."
Additional note. This method is also commonly executed through an R package that no one except Kruschke himself actually understands. I believe it relies on a quadratic approximation method for finding posterior distributions. I recall another person criticizing that approximation once in the past as well. I failed to understand the R package a long time ago, so I can't truly vouch for that. Just throwing it out there.