433 Comments
Comment deleted (Jan 26, 2022 · edited Jan 26, 2022)

Is this a strong prior that many people have, re: decision making and poverty? I'd never heard of it until maybe 6-7 years ago, when NPR started touting it like it was Einstein's theory of relativity.

My gut feeling on why people want it to be true: a lot of us are psychologically wired for life in hunter gatherer tribes, which were egalitarian and redistributive out of necessity (e.g., you hunted and killed a pig today and I didn't; tomorrow our situations may be reversed, so we're both likely better off if we agree to share). As such, we are not comfortable with economic inequality; it just feels wrong, and we want to think of it as a problem that can be solved with the right set of social policies.


Hunter-gatherer tribes weren't especially egalitarian.

Australian Aboriginal groups, for instance, tended to be heavily (but not strictly) gerontocratic... I say not strictly because old age was no guarantee of being an "elder". There were some tribes where all the women and girls wound up as wives of a handful of old men.


Poverty causes low IQ scores mediated by stress sounds like an okay hypothesis, but given that children have many reasons to be stressed apart from parental poverty you'd think it ought to be easily measurable. For instance, kids who are bullied at school are far more stressed than kids who are reasonably popular, so there should be a strong correlation between popularity and test scores too.
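
The mediation logic here can be sketched with a toy simulation (every coefficient below is invented for illustration; nothing is estimated from real data): if stress is the mediator, then *any* stressor that feeds into it, unpopularity included, should depress scores, so popularity and test scores would end up measurably correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical mediation model: test scores depend only on stress,
# and stress is driven by several stressors, popularity among them.
popularity = rng.normal(size=n)
other_stressors = rng.normal(size=n)
stress = -0.6 * popularity + other_stressors        # less popular -> more stressed
test_score = 100 - 5 * stress + rng.normal(scale=10, size=n)

# The correlation the hypothesis predicts we should be able to measure.
r = np.corrcoef(popularity, test_score)[0, 1]
```

Under these made-up numbers the popularity/score correlation comes out clearly positive; if real popularity data showed nothing of the sort, that would count against the stress-mediation story.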

Comment deleted
author

Previous studies which I'm also suspicious of had linked poverty to more low-frequency and fewer high-frequency brain waves. I think they're working from a model of something like more stress/worse nutrition -> worse brain development -> different brain wave pattern.

Jan 29, 2022·edited Jan 30, 2022

It seems possible, but hardly a sure thing, that good/bad differences in brain development would manifest as differences in brain wave patterns. But unless you have a model that predicts what kinds of brain wave differences you'd get from bad vs. good brain development, what sense does it make to measure EEG in this study? This study makes about as much sense as one that tests the effectiveness of a new antipsychotic drug by looking for EEG differences between treated and placebo schizophrenic groups. OK, so if the treated and placebo groups' EEGs are different, what do you really know? Only that the drug changes EEGs, right? You'd get that result for a drug that changes brain waves and has no effect on psychosis. You'd even get it for a drug that changes brain waves and also makes psychosis way worse. So yeah, the drug's Doing Something Real in the brain, but so what?

Back to the infants: Let's say the parents of the infants in the high-cash group but not the low-cash group used the money to buy a bunch of crack and smoke that shit hours per day right next to the crib — I'll bet you'd get some changed infant EEGs there. (I'm not saying I think that's how parents in the study likely spent the cash — this is just a thought experiment.)

Comment deleted

There are lots of theories about why this is the case: nutrition comes up a lot (but we could test that independently; no need for the wealth angle), as does some generalized "kids who grow up in high-stress environments have worse brain function" idea. Again, I feel like we could test that without the middleman as well. Probably the right move would be to figure out what we think the mechanism is first, and then test whether poverty increases it.


The obvious case is people being malnourished/undernourished due to lack of money to buy (good) food. In this case more money for parent -> improved food for household including children -> improved brain function in children. But as walruss said this is really the concatenation of two hypotheses ("adding money to a representative poor person will result in their household getting better nutrition" + "better nutrition improves brain function").


I'll say what I say about all of these studies, which is that poverty is obviously bad and we shouldn't have to slap some quantifiable label on it in order to justify doing something about it.

If you don't agree poverty is bad, you've never experienced it.

author
Jan 26, 2022·edited Jan 26, 2022

I debated saying something like this, because the people I linked (Stuart, Andrew, etc) all included something like "obviously I'm still in favor of cash transfers and we should still all be against poverty".

I decided not to say this, because it felt icky. I shouldn't have to mouth agreement with the point of a false study in order to criticize it as false. It also feels like bad incentives - if people fudging studies in order to support a point causes lots of people to talk about how true that point is, then people will fudge their studies more often.

(this is similar to my policy of "when condemning terrorist attacks, don't mention that the terrorists' end goals are just". They might be, but talk about it any other time!)

I have defended cash transfers and anti-poverty programs in the past, and I'll defend them in the future, but I think it's important *not* to defend them in the process of reporting on how people use false arguments to support them.

Comment deleted

"You're not going to solve this paradox with better science."

Okay, keep pretending that everyone is inherently identical and then go through the mental gymnastics of creating little just so explanations for why your policies don't work and inequality doesn't go away.

Comment deleted

Well, you can support direct wealth transfers as a kind of brute-force poverty reduction, but this ignores all the various other factors at play. People in poverty have lower IQs on average, and IQ is positively correlated with savings rate and negatively associated with likelihood of mortgage default, even after controlling for income. So the idea that you can simply force people not to be in poverty is fallacious.

Comment deleted

Poverty is a lack of money by definition, so you absolutely can remove people from poverty by giving them money. It's actually the quickest and most efficient way to eliminate poverty.


"...you can support direct wealth transfers as a kind of brute force poverty reduction, but this ignores all the various other factors at play."

Sure. But so what? The transfers still reduce poverty. Are you proposing that we do nothing to reduce poverty until we've done a theoretical analysis that takes into account all those other factors? Even leaving aside that we don't know if that would lead us to something better anyway, I think we'd do a lot better overall to start the transfers now and improve things (if we can) later than to simply delay further.


An anecdote (but offered in good faith): I think some of what we see as IQ is an optimization that gets diverted into other areas that don't show up on IQ tests. I once worked with a guy who was innumerate, i.e., could not count beyond about three. I was with him in a truck that broke down in the middle of New Mexico, about three hours from the nearest town and with no cell signal. Cracked axle; I told him it was impossible to fix. I tried to find higher ground to get a cell signal. He took a chain winch and, to this day I'm not sure how, applied enough pressure on each side toward the center that we could still drive (without disturbing the winch).

I also once worked in a call center (high-school-level education) where the phone agents figured out that if they applied for vacation in bulk orders in certain patterns, they could get any time off they wanted, regardless of built-in blocks around demand. It was a hack based on a rule that if a certain percentage of your requested time was approved, all of it would be approved. Then they'd just delete the time they didn't want. None of them could program or say what an algorithm is, but they figured it out.

All of this is to say: I'm not sure poverty and low IQ isn't something like an optimization for "keep trying this method that's really unlikely, because you don't have anything else in your resource bag and maybe you can make it work." Which, once you have resources, becomes a failure to use those resources economically, taking too long on a test, etc.

I’ve met dumb poor people. Just not a lot.


"Direct cash transfers" sounds like the best thing we've come up with.

Surely better than "we'll educate them out of poverty," which has been the message for the past several generations and which, as you say, we know doesn't work.


"little just so explanations"

Jan 26, 2022·edited Jan 26, 2022

I disagree with you on both whammies. At least since Calvin started teaching people that their success in life was a sign of their salvation in the afterlife, and at least since Jakob Fugger required paupers to be devout, willing to work, and to abstain from begging in order to access his social housing project, a huge part of political disagreement on social policy in the West has not been about whether poverty is bad, but *for whom* it is bad.

The right thinks that poverty is bad but necessary as it is mostly the "deserved" consequence of bad and/or immoral choices and fears helping poor people too much will lead to wrong incentives that are worse than the poverty itself. The left wants to help all poor people regardless of desert believing a certain minimum standard of living is basically a human right.

Naturally, the area of compromise has always been around helping the "undeservedly poor".

Even the most stone-hearted Rand reader can agree that child poverty is undeserved. This is why education is one area where, despite huge disagreements about the means, both sides are willing to invest heavily and can even, to a degree, agree on the standards that should be used to measure success. Unfortunately, if Freddie deBoer is to be believed, this whole approach is also hopeless.

Another route, namely coercing parents into making good decisions for their children or limiting the authority parents have over their children is also politically blocked, as conservatives and also many on the left will have none of it.

The path that remains open is to somehow help the children by helping the parents. The right might be convinced if this is actually shown to be effective. For the left this would be the dream scenario enabling them to help a group of people the right would otherwise never allow them to help.

Hence any causal pathway from helping parents to helping children is extremely valuable. Instrumentally, because it enables us to do something we all agree is good: helping children; and politically, because it enables the left to help "deservedly" poor people.

And that is exactly why this paper was breaking news in the New York Times.

Comment deleted (Jan 26, 2022 · edited Jan 26, 2022)

You seem to be assuming that working in fast food or for minimum wage necessarily makes you poor. Not so! A single person who works full time at $7.25/hr is above the poverty line. Two people who both work full time at the same wage can support two kids without falling below the poverty line.
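
The arithmetic behind this can be checked directly. A minimal sketch, assuming the 2021 HHS poverty guidelines (the thresholds change every year):

```python
# Assumed figures: federal minimum wage and 2021 HHS poverty guidelines.
WAGE = 7.25                    # $/hr
HOURS = 40 * 52                # "full time" = 2,080 hours/yr
POVERTY_LINE_1 = 12_880        # 1-person household, 2021 guideline
POVERTY_LINE_4 = 26_500        # 4-person household, 2021 guideline

one_earner = WAGE * HOURS      # 15,080.0
two_earners = 2 * one_earner   # 30,160.0

single_above_line = one_earner > POVERTY_LINE_1        # True
family_of_4_above_line = two_earners > POVERTY_LINE_4  # True
```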

The problem isn’t that fast food wages are too low, it’s a combination of people who for various reasons have a hard time sticking to a steady full time job and people who have kids before they are able to support them.

Comment deleted (Jan 26, 2022 · edited Jan 26, 2022)

I must point out that minimum wage and fast food jobs are far less likely to *be* 'full-time' jobs.

Bureau of Labor Statistics shows as of 2020 that out of workers paid federal minimum wage, 71% (175K) are part-time workers vs 29% (72K) full time workers: https://www.bls.gov/opub/reports/minimum-wage/2020/home.htm
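
As a quick consistency check on those shares (counts in thousands, as cited above):

```python
# BLS 2020 counts of workers paid exactly the federal minimum wage (thousands).
part_time = 175
full_time = 72
total = part_time + full_time    # 247

pt_share = part_time / total     # ~0.709, i.e. the cited 71%
ft_share = full_time / total     # ~0.291, i.e. the cited 29%
```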

(also the idea that being above the federal poverty line is sufficient to confirm one is Not Poor is a whole other can of worms)

Jan 26, 2022·edited Jan 26, 2022

I do not care if the line is drawn such that $7.25/hour is legally considered "not poor". That doesn't reflect the actual meaning of poverty, that's just a political lie created so that the populace does not have to confront how many people in the so-called "land of plenty" are impoverished.

Above and beyond that: many people WANT to work full-time, but are not. Their employers have them work 39 and a half hours instead of 40, because then they can get all the benefits of a full-time employee without any of the obligations. They will then ask someone to "cover a shift," so that they in fact get MORE than full-time labor without the commensurate compensation. You are very lucky if these anti-labor practices, two of the most common in the culture of low-wage labor, are completely unheard of in your experience.

Jan 26, 2022·edited Jan 26, 2022

I should have made it clearer that my post was descriptive (and a hugely oversimplifying caricature at that) and not an endorsement.

My point was merely: if you want to a) achieve better outcomes for poor children and b) give poor adults (working or not) money, then research that points to a positive causal connection from b) to a) is extremely useful.

Firstly, because achieving better outcomes in poor children (or any children for that matter) beyond a baseline is very hard and we don't know how to do it reliably; and secondly, because you will never sell b) to right-wingers on its own merits, but you may have a chance if it leads to a). Granted, you might not convince Peter Thiel, but you only need a small percentage of the more centrist voters.

Your other point I understand as saying that improving (educational) outcomes in children basically achieves nothing, as the economy is a zero-sum game where somebody has to do the low-paying jobs. I disagree with this too. It's all about comparative cost advantages. If the gap in educational outcomes between rich and poor kids can be narrowed, more jobs become available to the poor kids. Wages for low-paying jobs would increase, or those jobs would vanish (actually, I suspect there are way more jobs that the US could do without than most people think). Inequality would decrease. A country's Gini coefficient is not fixed for all eternity by some economic law.


"You can tell every individual fast-food worker to boostrap their way out of making minimum wage, but the end-result of that is that there are no fast-food workers left."

I think "eliminate all the boring, low-wage jobs" is what we've been charging towards en masse for the last century.

Jan 26, 2022·edited Jan 26, 2022

I don't think the person you're replying to is right, but

> You can tell every individual fast-food worker to bootstrap their way out of making minimum wage, but the end result of that is that there are no fast-food workers left.

is just a silly argument. Even if it were possible, they weren't all about to simultaneously change themselves into architects next Tuesday. We'll keep on getting new fast-food workers who haven't changed into architects yet.


>You can tell every individual fast-food worker to bootstrap their way out of making minimum wage, but the end result of that is that there are no fast-food workers left.

If we could effectively cause this to happen, it would not result in no fast-food workers. It'd result in fast-food worker wages going up, because the supply of such workers would diminish while demand would increase (due to the ex-fast-food workers now being able to afford to eat out more often).


>The right thinks that poverty is bad but necessary as it is mostly the "deserved" consequence of bad and/or immoral choices and fears helping poor people too much will lead to wrong incentives that are worse than the poverty itself.

From what I understand, the argument is also very much that the wrong incentives are not bad in themselves, but lead to more poverty and perpetuate it in the end. And since the welfare state is extremely old at this point, it is not logically consistent to assume that poverty is mostly a "deserved" consequence.

It is imposed consequences from previous (or current) attempts to fix the very problem that cause people to be born into bad fortune.

We are in hell and got there through flawed policy, dictated by our (mostly) good intentions. And we are very stupid and generally don't learn from previous mistakes.

(Or probably institutions act stupidly; because of public-choice theory they are incapable of learning, because of *mumbling something about* incentives again.)

The welfare state never shrinks (as government programs never do) and always increases (because good intentions...).

Also you cannot cut government programs ever (it simply does not happen).

And trying to cut the welfare state specifically is impossible, as you would be accused of having bad intentions. (And given how massive the welfare state is, you would have to peel it back incrementally, layer by layer; each layer breaks a dependency, and that hurts.)

So the issue is that we are stuck in an inescapable, inadequate equilibrium.

The system will collapse before it can ever get fixed.

Something like that. Also, what I described might be more of a right-libertarian version. And it's too hand-wavy, because I'm a bit too tired for this.

But your description is definitely too barren to do the position full justice.


Thanks, that's very interesting!


>Another route, namely coercing parents into making good decisions for their children or limiting the authority parents have over their children is also politically blocked, as conservatives and also many on the left will have none of it.

NB: there are solid reasons for blocking this. The big one is that a multiparty democratic state in which parents' ability to counter state propaganda is too limited is unstable and will on a timescale of years to decades become a stable one-party state (specifically via someone being in power long enough to brainwash enough children to consolidate an unbeatable coalition). The obvious example is Hitler, although Putin's also managed it.


Think I agree with most of your points, but "Poverty is a necessity of an economic system, which at the same time is capable of feeding 10 billion people." is a pretty intense assertion, and I'm not sure what external source you'd use to try and back it up.


First, we need to define poverty as something which by definition won't always exist, no matter how rich everyone is.

Then, once we have a definition of poverty which is absolute, as opposed to relative (i.e. bottom 10% of income earners!), we can begin talking about the various solutions to that and who it is best to incentivize to actually accomplish them.

Apr 4, 2022·edited Apr 4, 2022

We have limited resources.

If these cash-transfer programs don't actually help in the long term, then we should be spending our money on other things that will help in the long term.

Poverty isn't "a necessity of an economic system". Poverty exists because some people are not very productive. If no one in society had an IQ below 115, and no one was mentally ill or addicted to drugs or suffered from impulse control disorders, the poverty rate would be very nearly 0.


Can you explain that last paragraph? How does that relate to the fact that two countries with the same average intelligence and substance abuse rates can have dramatically different poverty rates?


First off, no country meets the criteria I laid out, or even comes close.

Secondly, historical factors will result in different presents. A country that was under the thumb of socialists for decades will have stunted economic development relative to its neighbors. A country 300 years ago would be considered almost entirely poor by modern standards. Socialist countries were poor because socialism is an insane ideology with no basis in reality and severely damaged the economy. However, these sorts of historical patterns are temporary, as seen by the Asian Tigers, which emerged from underdeveloped status and became developed countries very rapidly.

Thirdly, "poverty rate" is defined in multiple different ways which vastly changes measured poverty.

The US has basically eliminated absolute poverty, for instance: once you take government assistance into account, there are few people who actually live in material poverty.


Oh boy. Our cruxes are so far back that we're not going to get anywhere, but thanks for elaborating.


I think my point is exactly that: it seems like researchers feel like they HAVE to fudge it, because if the negative effects of poverty aren't quantifiable then SOLUTIONS to poverty are not justifiable. I think the whole line of reasoning is toxic and reflects a deep social failing which in a funny way is connected to a veneration of STEM-based thinking about the world. It's a bit like pegging teacher pay to standardized test performance, as if the only learning that counts must be quantifiable.


If we can't measure the learning, then how do we know it counts?


This comment is... a very good example of the point G Retriever is making. Forget any individual case for now. The point is: we shouldn't assume that everything we care about can be easily and correctly quantified. When we use peer-reviewed studies as our only means to justify something, we're implicitly assuming that there are no effects (good or bad) that we can't/don't know how to quantify. And this is an obviously terrible assumption. This doesn't mean we stop caring about evidence, or we just go with our gut, or we don't try to quantify things. Just that it's important to remember that this one particular way of finding truth (peer-reviewed papers in prestigious journals) is not The One True Source of All Knowledge.


I didn't say peer-reviewed studies are the only kind of evidence. I asked how do we know the unmeasured counts. Let's grant there's something you care about and can't quantify. How do you know anything you're doing is having any effect on that at all if you can't measure it?


You can't, directly, but the world is complex. Civilization functioned for thousands of years without trying to force every aspect of human life under the dissection lamp to be reduced to neat little columns of data.


Let's say there's no effect of poverty on intelligence. Instead you get 'street smarts' that help you stay alive. Would that be a reason NOT to end poverty?

Say there's no effect of poverty on propensity toward violent crime. Is that a reason NOT to end poverty?

Say there's no effect of poverty on a long list of ills we once thought were related to poverty. Do you need a reason to end poverty, or can we all just agree, "Poverty sucks, and we don't have to justify or quantify how much it sucks. Let's agree to work on the problem because we don't like it."


To me, another big problem with the study is that there is no hypothesized mechanism by which poverty supposedly retards brain development.

So if a correlation between an extra $330 and different brain waves is found, there is no plausible explanation for why that would have happened. You have to imagine that the extra money somehow changed one or more other (unknown) variables that then, somehow, through an (unknown) process, impacted brain waves.

Shouldn't the research first be directed at figuring out exactly what changes child brain development (nutrition, maternal attention, etc.)? Then researchers could look at how poverty affects those variables. By contrast, this seems like an "advocacy study" designed to put the cart before the horse for political reasons.


Maybe using individual cases to build a rough model of the underlying dynamics.

For example, an author writing a story wants it to be interesting. They probably have a model of what factors make a story interesting, based on other things they've read, and so can test things they might write against their internal model of how stories work.

For public policy questions it would be the model of human nature under various circumstances. Less reliable than a high-powered study but maybe better than nothing.

For the education issue you questioned earlier, this actually goes the other way for me. In my experience most people don't remember the vast majority of what they learned in school after a few years. If we get someone to "know" something for a final, and they promptly forget it afterward, what are we really achieving?

Not *all* education, but I'd argue that a huge proportion of modern education is useless for most people (aside from the signalling factor), and we also fail to teach them other things that would be more relevant to their lives.

Like, is chemistry still a thing that a huge % of the population should spend years in high school on? Sure you need it to *become* a doctor, but what % of doctors even remember how to do low-level chemistry work?

And why not teach basic medicine in high school, in an era where lifestyle diseases are everywhere?

There's tons of stuff like that, it feels like *how* we teach things gets all the attention when we should question *what* and *why* more.


The motte is that not everything that matters can be measured. The bailey that typically follows is that therefore my snakeoil that doesn't improve any measurable metrics should be used anyway because of my gut feeling that it helps. Similar thought processes held medicine back for centuries, and are still undermining education. It's good to invent ways to measure the things we don't yet know how to measure and actually prove whether the intervention improves those things.

Jan 26, 2022·edited Jan 26, 2022

Counterpoint: those clever little ways we invent to measure things have a large chance to turn out to be WRONG (see Scott's post about rationality's failures on the old blog, such as London vs. Chicago traffic). Sometimes moral intuitions are, in fact, right.


I felt the urge to agree but what if one day I am the one with a strong intuition that goes against the imperfect measurements?


Effective medicine still contains a lot of things that are poorly understood, obviously work, and are extremely hard to capture in RCTs (see Scott's point about parachute RCTs).

The most extreme example would be physical therapy - every effective therapist is working off pre-EBM knowledge and getting results, while academia is chasing its own tail trying to prove the sky is blue. If you have lower back pain, in a majority of cases the best your EBM doctor can give you is a painkiller, and questions of etiology are met with "idk lol".

Extrapolating this to other fields is left as an exercise to the reader.

Jan 26, 2022·edited Jan 26, 2022

We could be like Freddie, who says (in effect) "you aren't going to find any effect on kids from pre-K because there is no effect. But letting people send their kids to pre-K makes their lives (both the kids and the parents) happier, so just do it already, and don't set ourselves up for failure when we don't find any results in 10 years."

Jan 26, 2022·edited Jan 26, 2022

If we want subsidized daycare and greater child-having subsidies in general we should just do that. They do it in Finland etc. You don't have to pretend it's for some irrelevant/dubious scientific purpose. Unless it's to trick people into something they don't really want. But that is where a lot of this ends up, morally. People with "noble goals" trying to herd the sheep and playing loose with the virtue of truth.

The other aspect is when it comes to the money spent, it's quite possible that "more daycare" (e.g. longer hours) is really the better economic direction than a bunch of educational attainment statistics in very small children. Plus it would affect the design of the facilities and just what it is that is healthiest for these children to be doing all day.


Well fuck it, I'd be happier with free ice cream. Should the government give out free ice cream?


At whose cost? As another poster says, ice cream and chocolates make very many people happy. What's the line to be used to say for this we shall use tax money, and for that we shall not? To me it's quite clear that there has to be a significant externality or public good aspect involved.

Jan 26, 2022·edited Jan 26, 2022

People see their money go down a black hole with education and welfare spending. I hardly think it unreasonable that they want some quantifiable measure.


It's unreasonable if such a measure is not practically knowable.

Jan 27, 2022·edited Jan 27, 2022

If literally no measure is practically knowable, then those whose money is being extracted would be right to demand that the spigot be turned off completely.


Why would that necessarily make them right? It certainly weakens the arguments of people in favor of education and/or welfare spending, but it doesn't eliminate them completely. Not all knowledge comes through measurement and analysis of the resulting quantitative data.

There are a number of outcomes that I never expect to have a corresponding quantitative measure of reasonably high quality (e.g., a measure of how good the art produced by National Endowment for the Arts grantees is). There are a number of other outcomes where a reasonably good quantitative measure may be possible but wouldn't be practical, due to the high expense of collecting the data relative to the expected benefits of collecting it (I can't think of an example off the top of my head, unfortunately).

When collecting high-quality quantitative data either isn't possible or isn't practical, we don't have to throw our hands up in the air and stop attempting to answer the question. Rigorous, objective (or as objective as anything ultimately done by a person ever can be) analysis of qualitative data can be very informative!

If it ultimately isn't possible/practical to effectively quantify any of the outcomes of education/welfare spending (which seems improbable to me), this doesn't hand victory to people opposed to this spending. A lack of quantitative proof of the effectiveness of the programs isn't the same thing as a lack of proof of the effectiveness of the programs. It also isn't quantitative proof that the programs are ineffective. The quantitative effect is unknown/unknowable, not necessarily nonexistent. All that has changed is that the arguments for/against the policy in question have to be carried out in the somewhat fuzzier world of qualitative analysis.


The US has lots of anti-poverty programs and spends a significant amount of money on them. Most of them have been around for decades, too. SNAP spending alone was $55 billion for 2019, $1,548 per beneficiary per year, and the program dates back to the 1930's. I'm not sure why you think it would be necessary that researchers fudge statistics to justify anti-poverty interventions *now,* in 2022, coming up on a century after the New Deal was enacted.
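
Those two figures are at least internally consistent. A quick check (both inputs are the numbers cited above):

```python
# Cited FY2019 SNAP figures.
total_spending = 55e9            # dollars
per_beneficiary = 1_548          # dollars per beneficiary per year

implied_caseload = total_spending / per_beneficiary   # ~35.5 million people
```

An implied caseload in the mid-30 millions is in line with SNAP's reported FY2019 participation of roughly 35-36 million people.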

Expand full comment

"Necessary" isn't the right word per se. The researchers may or may not feel a moral obligation to do so, regardless of current politics.

As a side note, SNAP is the most basic, fundamental sort of anti-poverty program, in that it's literally there to prevent families from starving.

Expand full comment

Did families regularly starve to death before SNAP? Otherwise there was presumably a more basic anti-poverty system in place before.

Expand full comment

Yes, there was, it was called "having strong communities". Sadly, in America at least, that ship has sailed- Americans are a fundamentally nomadic people who are more likely to view their neighbor as an enemy instead of an ally by multiple orders of magnitude.

Expand full comment

The concern over being unable to support something that is obviously correct and apparent, due to an inability to quantify it, stems from learned helplessness and an inability to simply accept things as they are perceived.

Scott is saying that despite not having numbers to back cash transfers, he still feels they are correct and supports them - that's great! The authors of this study are marginally more uncomfortable with their own judgement than he is, and preferred to support it with very questionable evidence to avoid having their opinions stand on their own.

The fundamental issue is the blanket severe discomfort with simply declaring and accepting facts based on a posteriori lived experience. It's interesting that on this issue Scott is quick to state that he is comfortable supporting cash transfers, but otherwise tends to write from a very evidentiary perspective.

In a sense it's Gell-Mann Amnesia for knowledge; you arbitrarily decide that some things are so obvious you can ignore that studies are flawed, but then turn around and expect studies for others.

Expand full comment

Yes. And doesn't that also go to the even bigger issue of publication bias. No one wants to publish a social science study that says: "life's a bitch, and there's nothing we can do about it." So those study results either disappear unpublished or get recycled (p-fished) for some (more hopeful) hypothesis.

Serious question: Does preregistration still allow researchers the option of just spiking their study if the data comes out "wrong"? I suppose it does. But this study apparently involved too much money and effort to tank (all those $330/mo. payments have to show some return on investment).

Expand full comment

Some preregistrations come with a pre-publication agreement, but my sense is that most journals still aren’t willing to commit to publishing negative results.

Expand full comment
Jan 26, 2022·edited Jan 26, 2022

The question isn't a binary "solve poverty" vs. "don't solve poverty", but a question of how much effort* to spend on reducing poverty. Reducing poverty more isn't costless**, so how much effort people support is a trade-off. Any data point can slightly shift what point to choose on that trade-off.

Even what to define as poverty at all is non-obvious. Most of the world, and for most of history all the world, would consider $20,000/year wealth, not poverty.

*: read: other people's money, usually

**: because if there are costless ways of reducing poverty that we know about, we're already doing them

Expand full comment

Cash transfers are not solutions to poverty in any case. The largest reductions in poverty by many orders of magnitude have come from social and political institutions that allow for and encourage economic growth. This should be obvious if you look at the broad sweep of history, but it becomes painfully obvious if you consider the recent experience of China and India.

Expand full comment

These are two very different effects. Yes, high growth in emerging economies is great for reducing poverty, but it’s not particularly applicable to reducing poverty in an advanced economy with lower growth rates.

Expand full comment

Poverty is bad. No doubt about it.

It does not necessarily follow that cash transfers are either an optimal or universal solution. Perhaps they would work very well for parents who are financially savvy (my mother was incredibly good at handling budgets even when she financially struggled), but make things in the family worse if the parent is addicted and will buy more drugs instead. That happens; I knew enough alcoholics to know that their finances are a hopeless black hole with the other end at the local pub.

Of course, a careful policy based on each individual family's needs cannot be easily written into law, so it won't be pursued.

Expand full comment
Comment deleted
Expand full comment

We don't. I lived in a block house with at least 7 other poorish families. That was the Rust Belt in Czechoslovakia when the Iron Curtain fell and the rusted old heavy industry went to the dogs. Loss of previously stable jobs was sudden and the city never really recovered.

It was fairly obvious that some were better at economic decisions than others.

Also, of the alcoholics I know right now, none is particularly needy. All middle class people who turn their earnings into booze.

Expand full comment
Comment deleted
Expand full comment

Families of said alcoholics are miserable, though.

The question is how to keep that money out of the husband's hands and going to the needs of the wife and kids.

Also, studies of alcoholism indicate a strong genetic component. I am all for research into treatment of alcoholism, but contemporary psychologization of said disease helps only a few people. My friend was in a rehab; their success rate was about 15 per cent, less than with many cancers.

Maybe Scott could chime in, he is a psychiatrist after all. Is treating alcoholism with psychology modern shamanism (because it surely seems that alcoholics MUST have some underlying psychological issues) or real peer reviewed science?

Expand full comment

Trying to solve addiction by enforcing poverty doesn't seem like a winner.

Expand full comment

Oh, it definitely isn't, but you seem to argue from a position that cash transfers are the optimal and universal solution to alleviating poverty, so whoever casts any doubt on them wants to enforce poverty.

Are they? Are you sure? After all, countries that elevated themselves from poverty to development - and there were a lot of them - didn't do so by funneling cash to all inhabitants.

Expand full comment

Cash transfers was Scott's position, not mine.

Expand full comment

I didn't see anybody here arguing that cash transfers were the _optimal_ decision. Just that it's a reasonable way to take out a substantial chunk of the problem. I feel as if you (and perhaps others here) are arguing to do nothing about the problem until we come up with a better solution than cash transfers. (If you already have a better solution, I'm all ears.)

Expand full comment

Then please don't feel so. I never said anything like that explicitly, and telepathy over TCP/IP does not work. If I wanted to gut certain types of spending, I would proclaim it openly; I have nothing to lose by being candid on the Internet.

I would be a great friend of a more personalized approach, though. One size fits nobody.

I realize that it would probably be *more* expensive, but I think that extra money could buy extra efficiency.

Expand full comment

We're debating this like a majority of people in poverty are addicted and spending all of their money on alcohol instead of diapers - can we temper this with a statistic? Because if only, say, 5% or 10% of people in poverty with children are gratuitously neglecting them, then what about the other 90% to 95% who aren't?

Expand full comment

I agree with your position of refusing to mouth moral support for someone's goals as the price for criticizing their logic or data.

There should be a word for the rhetorical habit of reciting one's agreement with X as a predicate to establish one's bona fides to criticize Y. (E.g., "I hate Trump and white supremacy as much as the next guy . . . but . . .")

For one thing, the cumulative effect of these en passant pro-forma recitations of what is supposedly clear to everyone tend to have an "anchoring" effect on the Overton Window that may ultimately be more significant than the particular argument being made.

Expand full comment

Mandatory throat clearing

Expand full comment

After reading this I feel bad about asking you to clarify your precise COVID vaccine resolution in line yesterday. Thanks for continuing to post despite well meaning people trying to vetocracy every sentence you make, and I hope my paying for your posts+this comment offsets the negative reinforcement I gave you yesterday!

Expand full comment

IMO you made the right decision and your reasoning is sound. However, it comes with a trade-off. Many people, including (particularly?) "educated" people, weigh new information strongly by (partisan) identity. Failure to disclose tribal affiliation (or sympathy) means that those people will not consider your analysis.

Expand full comment

A query: If all evidence gleaned from history were to indicate, beyond all doubt, that cash transfers have had no positive impact on the objective of reducing poverty; and if, on the contrary, all evidence made available by history, both American and worldwide, were to suggest that cash transfers actually prolonged poverty and exacerbated its symptoms--would it be fair to conclude that a position for cash payments is not compatible with a position against poverty?

Hypothetically speaking, of course.

Expand full comment

I agree that poverty is bad, which is why we should condemn rather than reward women who irresponsibly choose to have children while impoverished.

You get what you incentivise, which means that incentivising poor women for having children is one of the most horrible things you can possibly do.

Expand full comment
Comment deleted
Expand full comment

"If you lift everybody below the 60% line to above it, then nobody is considered poor anymore, but also, the median income is not affected by this at all."

You're encouraging low IQ women to have more children, which makes society on net balance worse off.

"What's the point of being capable of feeding 10 billion people, if you then just decide to not do it?"

Because there's an absolute ton of negative externalities associated with having 10 billion people on earth, especially considering those additional people are almost assuredly going to be below average IQ.

Expand full comment
Comment deleted
Expand full comment

Jeffrey, you seem to think IQ is only ever going to be relative and never absolute.

I'd like you to consider the fact that in our modern world, we can observe things people with higher IQs can do which those with lower IQs simply cannot do.

Jeffrey, do you really think that in a world where those with lower IQs have successfully bred more, we will have more or fewer people who can do high-level skills?

Because it seems pretty clear to me that, however you'd like to describe IQ, it does have absolute and not merely relative consequences in the world.

You seem to be arguing from that basic fallacy - that IQ is only relative and not something that produces absolute, concrete results, independent of one's position on the general IQ distribution.

Well, I'm sorry to say, but your argument that what low-IQ women do is none of our business falls pretty flat here. It actually IS my business if I realize that helping a poor lady now will add to human suffering later on. I truly don't mind the spending. But some things are too high a cost.

Expand full comment

I don't know why you keep bringing up this ten billion people thing. Poverty in the US isn't about lack of basic food, food costs a few bucks a day and is very easily obtainable from charity if you really need it for some reason. Food is solved.

Expand full comment

Almost 14 million households in the U.S. had food insecurity during 2020: https://www.ers.usda.gov/topics/food-nutrition-assistance/food-security-in-the-u-s/key-statistics-graphics/#foodsecure

Not quite yet solved I'd say.

Expand full comment

Poverty is inversely correlated with number of children. If you want "low IQ women" to have fewer children: give them money!

Also, as it happens, it's a good moral choice. And it also makes my personal life better, because I have to deal less with homeless people.

Expand full comment

Inversely correlated with money? Or being the kind of person who can make decisions which lead to having money?

Expand full comment

This is a very well documented effect across and within populations. So there is probably even a decent chance of it being causal. Here is my data

https://ourworldindata.org/grapher/children-per-woman-by-gdp-per-capita

Expand full comment

Where's the evidence that the causation is "have money" -> "less kids"?

Expand full comment

If being below 60% of median income is the definition of being poor, even if you don't count the transfer payments themselves as income, that means that you've defined poverty so that *by definition*, poverty will exist without transfer payments and the only way to alleviate it is with transfer payments.

Defining poverty that way, rather than as "unable to pay for food, shelter, etc." is a numbers gimmick to justify social spending.

Expand full comment

Technically, that definition does not require poverty to exist without transfer payments. There is no requirement that people exist who earn less than 60% of the median income (i.e. if the median income is $100, there's no requirement that people exist who earn less than $60).

In practice, such people do exist in real distributions with significantly-nonzero Gini coefficients, but they are not required by definition.
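As a toy illustration of this point (hypothetical incomes, not data from the study): under a 60%-of-median definition, the poverty line moves with the median, so a sufficiently compressed distribution has a poverty rate of exactly zero, while an unequal one with the same median does not.

```python
# Toy illustration of the 60%-of-median relative poverty definition.
# Both distributions below have a median of 100, so the line is 60 in
# each case - but only the unequal one has anyone under it.
import statistics

def relative_poverty_rate(incomes, fraction=0.60):
    """Share of people earning below `fraction` of the median income."""
    line = fraction * statistics.median(incomes)
    return sum(1 for x in incomes if x < line) / len(incomes)

unequal = [10, 20, 30, 100, 100, 100, 200]      # median 100, line 60
compressed = [90, 95, 100, 100, 100, 105, 110]  # median 100, line 60

print(relative_poverty_rate(unequal))     # ~0.43 (3 of 7 below the line)
print(relative_poverty_rate(compressed))  # 0.0 (nobody below the line)
```

It also shows the objection raised upthread: scaling every income in `unequal` by 10 leaves its "poverty rate" unchanged, since the line scales too.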

Expand full comment
Jan 27, 2022·edited Jan 27, 2022

I think you should understand my point. Yes, in theory if everyone earns similar amounts there may be nobody below 60% of the median. But that measure isn't a measure of poverty at all. It's contrived so that if everyone gets richer, this has no effect on the poverty level whatsoever; in fact, if one person gets richer, the poverty level is likely to go up. And it's defined so that poverty has no relation to whether people lack food, shelter, clothes, or ability to pay their bills. If Beverly Hills were a country, it would still have a large contingent of poor by this definition.

Expand full comment
author

Even if you wanted to do this, how would you do it without also harming the children?

Expand full comment

If raising children in poverty is bad, we could treat it like any other form of abuse and take the children and place them in foster care or up for adoption.

Expand full comment

Brilliant. Definitely no chance for horrible unintended consequences with this idea.

Expand full comment

There are many forms of adversity children experience, poverty being just one of many. Removing children from their parents results in other forms of adversity, so the reason to remove them must be egregiously, life-threateningly bad, not merely not as good as what richer kids have.

Poverty is not a form of abuse by any stretch of the definition of abuse. Abuse and neglect have legal definitions and standards and it's a really good thing they are distinct from adverse experiences that kids have had forever -- including things like war, emigration, illness, disability or death of a parent, divorce. It's like saying we should take children away from their divorcing parents since divorce is hard on kids.

Kids whose parents move around a lot and have to change schools a lot experience bad impacts, including being more subject to bullying. Let's take those kids from their parents too for moving their kids around too much.

There's pretty good research showing that children of highly critical and perfectionist parents can suffer longer-term psychological impacts than ones who experienced some physical violence. I work with the grown kids of a lot of those kind of parents and I kind of wish I could take them away from those parents now because they are still causing harm. The grown kids I've worked with who grew up in poverty are way better off than the kids who grew up with well-off narcissistic parents.

Expand full comment
Apr 4, 2022·edited Apr 4, 2022

They've tried this. Unless the child is in truly dire circumstances, this is actually a net negative for the child. Poverty isn't bad enough to outweigh the drawbacks of being put into the foster care system.

Expand full comment

One way is by making transfers to poor women (perhaps ones who already have dependent children) conditional on receiving long-lasting injections preventing pregnancy.

Expand full comment

Why wait until they have children? Why not just start forcibly mass-sterilizing people that the state decides shouldn't have children? Don't worry, I'm sure the state will make very good judgments, and the process will be fair and transparent and easily corrected for errors.

Expand full comment

“Why do A when you can do B, which is horrible and how dare you suggest doing B”

I don’t find this to be the best form of argument.

Expand full comment

We don't have to rely on the state making good judgments, or being fair & transparent or prone to correcting its own errors. It's also easier if people are opting into the program for benefits rather than being forced in, requiring enforcement effort for those who want out.

Expand full comment
founding

If this is a utilitarian/consequentialist framework, then maybe deterrence is more than enough. Sacrifice a few poor people each year, randomly, in unusually cruel ways, e.g. by not giving them insulin, or imprison them for an extremely long time if they commit some kind of victimless crime, or whatever...

Oh wait.

Expand full comment

Many people (particularly those worst off) tend to be hyperbolic discounters. Low probability high impact events get discounted. This was Gary Becker's greatest mistake:

https://marginalrevolution.com/marginalrevolution/2015/09/what-was-gary-beckers-biggest-mistake.html
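As a sketch of what "hyperbolic discounter" means (illustrative parameters, not fitted to any data): compared with standard exponential discounting, a hyperbolic curve drops sharply for short delays (present bias) but flattens out, so very distant consequences end up discounted less, not more.

```python
# Hyperbolic vs. exponential discounting, with illustrative parameters.
# Hyperbolic falls faster at first (present bias), then flattens out.

def exponential_discount(delay, rate=0.05):
    """Standard exponential discount factor, 1 / (1 + rate)^delay."""
    return 1 / (1 + rate) ** delay

def hyperbolic_discount(delay, k=0.5):
    """Simple hyperbolic discount factor, 1 / (1 + k * delay)."""
    return 1 / (1 + k * delay)

for delay in (0, 1, 10, 100):
    print(delay,
          round(exponential_discount(delay), 4),
          round(hyperbolic_discount(delay), 4))
```

With these parameters, a one-period delay is penalized far more heavily under the hyperbolic curve, while a 100-period delay is penalized less - the signature crossover of present-biased preferences.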

Expand full comment

Poverty itself encourages short-term thinking. If you want people to make better decisions, pushing them into further poverty isn't what you want to be doing.

And anyway, if they're already poor, and they're deciding to have kids they're likely to have trouble supporting, what on earth makes you think that keeping them poor or making them even more poor will suddenly lead them to make the exact opposite decisions to the ones they've been making? You're inhabiting a world where you just wish harder and harder that these people are Homo economicus, and they're not.

Expand full comment

What percentage of women actually choose to have children for the cash grab? Surely after all the outcry in the 80's about "welfare queens," we would have some statistic on the percentage that actually does this versus, say, endemic poverty cycles that have poor people who were raised poor as their only experience and then wanting their own families and figure they will somehow make ends meet because their mom did.

Expand full comment

> What percentage of women actually choose to have children for the cash grab?

They don't have to be consciously deciding to have children for the purpose of receiving welfare for the incentive to have an effect. That's a basic misunderstanding of how incentives work.

Expand full comment

This is the wrong way to frame it. It's not a binary "choose to have kids for the cash grab" or not, it's that a bit more cash slightly changes the probability by slightly changing the balance of considerations.

If you want proof that people's reproductive decisions are sensitive to financial incentives, look no further than this majestic work:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3665046

Expand full comment

My actual point was that this discussion is extremely focused on the poor who are taking advantage of the system but I'm asking how many is that, really? Is it a rounding error? Is it a majority or a plurality? If the vast majority of those in poverty are trying their best to be contributing members of society, then the small number (if it's a small number) of scammers should not derail helping those who are below the poverty line.

Expand full comment

Sigh...you've had this explained to you well over a dozen times by now. Cognitive ability varies enormously within the populations of even developed western countries. This has enormous social and economic implications. Understanding the causes of this variation helps us understand the world better and therefore develop more effective policies.

By your logic, one could say "you don't need science to know that being more educated is good, so why are we doing all this research about the heritability of intelligence when we should just increase spending on education?"

If poverty doesn't explain difference in cognitive ability, then we basically have to admit that the heritability studies are correct, these differences are mostly genetic, and the only way of improving equality is through direct wealth transfers. But in a remotely sane world, this reality ought to have enormous implications on how we think about reproduction and immigration, but we don't live in a sane world, we live in a completely backward world where people like you (yes, literally) describe mainstream heritability research as "1920s eugenics".

Expand full comment
Comment deleted
Expand full comment
Jan 26, 2022·edited Jan 26, 2022

Low IQ is highly correlated with poverty. You don't need to run a study to figure out why: stupid people have difficulty managing their lives in our highly complex society. The only way I can think of to do this non-repressively is to attach welfare benefits to birth control in some form or fashion. So, say, if you have one child you can't take care of yourself, society will help you financially with that child, but you have to agree to stop having kids in a provable way. This actually selects against not only low IQ but any traits that lead to poverty (for instance, I suspect emotional instability and proneness to drug addiction are traits not 1-to-1 correlated with low IQ). The next generation will have fewer of these problems, and it will remove the dysgenic selective pressures we currently have, which otherwise will mean more poverty in future generations.

Expand full comment

Even granting everything you say is true, your "solution" assumes that the only possible way to address these problems is through altering the gene pool using state power (can this possibly not be called eugenics?). But this is a ridiculous assumption. The concept of something being "determined by genetics" is an abstraction; genes all act through some mechanism (they're not magic, after all), and that mechanism can in principle be influenced in other ways. Just because we haven't figured out how yet doesn't mean we can't, and it doesn't mean we're better off trying to manipulate and/or coerce people into different reproductive choices.

For example, obesity is something like 70% heritable. But until the mid 20th century it affected only 1% or less of the population, whereas today that's closer to 40% (in the US). Obviously "it's genetics" is not a satisfactory explanation of obesity. And it doesn't follow that "genetics" would be part of an effective solution.

We haven't discovered how genetics influences IQ, emotional problems, drug addiction, etc. But each of these has mechanisms that can be discovered and influenced without resorting to eugenics. The unknown unknowns of eugenics alone should give us a very strong prior against them.

Expand full comment

Altering the gene pool using state power? Currently we're using state power to promote dysgenics, selecting for traits which are the least likely to make net contributors to society, by taking away resources from people who net contribute and giving them to those who net take from society - ensuring we will have more people who do not net contribute in the future. I'm suggesting we "stop the bleeding" by adding conditions for helping people who can't seem to help themselves.

Besides it being undeveloped technology, the unknown unknowns of straight genetic engineering (gene editing) are much more dangerous than those of a small and uncoercive selective breeding policy. I am 99% certain that if we do do genetic engineering, it is going to have some horrifying outcomes for some people, although I still think we should progress with it.

Expand full comment

Ah, yes, the classic game of "rig the system, then blame the people losing the game for not having the system rigged in their favor."

Tell you what: would you be willing to personally tell some poor person, to their face, "You are my genetic inferior and do not deserve the same rights as me?" Because if you aren't, then why should I take you seriously?

Expand full comment

AFAIK, something of the kind was done in India about 50 years ago, when a poor man could get a radio, a moped or something if he had himself sterilized; even Douglas Adams joked about that. To sterilize women would have been too much of a procedure, I guess. I doubt these programs had much impact on average Indian IQ, but I may be wrong.

Expand full comment

I read an article saying they do it with women as well, and it was something they were doing recently. The article I read said the women got new frying pans; most of them had had all the children they wanted. It's a good policy, especially for India, which had been pushing up against the Malthusian ceiling. If a set of frying pans tempts you to give up your fertility, it's hard to imagine that you would be able to give a child a good life. As long as the policy is voluntary and not coercive, of course - though some will argue that it was coercive, because life is in itself coercive.

Expand full comment

On the other hand, if you don't think that many of the negative effects of poverty are measurable, I doubt you've experienced it. It's not like anyone doubts studies showing that poverty is linked with increased anxiety, for instance. The observable issues with poverty tend to be the more consistently measurable ones, because they have big enough effect sizes to observe experientially.

I don't buy your argument here that these social scientists are justifiably under pressure to fudge these results because of a misguided social need to have the effects of poverty be measurable before we act. Some outcomes of poverty are quite measurable and quite bad, and that's already well-established. We have plenty of motivation to act if all that's holding us back is quantification.

I think the incentive to fudge these numbers is far more venal. When you produce novel research that supports popular political positions in your field, you get published in the New York Times, and it's much easier to get tenure/future grants/prestige.

Expand full comment

Has anybody done any work to determine which aspects of poverty are the problem? As a hypothetical, I can see a poor family in a safe area with guaranteed housing, a nearby park and library having fewer problems than a better-off-financially family which has to worry about gentrification and violent crime.

Expand full comment

I think that even among people who agree that 'poverty is bad' there is room for a great deal of disagreement as to the definition of poverty (and of 'bad') - I grew up under circumstances that were absolutely defined as 'impoverished' then and now, and neither I nor my parents felt it was horrible either often or on the mean.

Expand full comment

Sure. But nobody thinks the badness of poverty has anything to do with gamma vs alpha brain waves. Pretty much everyone agrees things like higher stress, hunger from missing meals and that sort of thing are the issues.

Expand full comment

I agree, but I think people care about this more in terms of meritocracy debates.

The basic argument that underlies a whole lot of public policy questions is 'poor people underperform on measures we care about because poverty grinds them down, and if we give them opportunities they will improve and produce more value than we spent to help them' vs. 'poor people underperform because they're genetically inferior and any attempts to give them opportunities are wasteful; we can decide how much welfare to give them, but no other policies matter.'

Expand full comment

'darwin' appears to be against social darwinism! Nominative determinism swings and misses!

Expand full comment

"Social Darwinism" (financial/social success = "fitness") was contra actual Darwinism (reproductive success = "fitness"), so this is more like a ball than a strike.

Expand full comment

Eh, financial and reproductive success are at least correlated. I say it's at least a foul tip.

Expand full comment

I'm generally a social Darwinist, but I don't think it naturally follows from Darwinism in any way. Social Darwinism is about oughts; Darwinism is about ises.

Expand full comment

I don't think "attempts to give them opportunities are wasteful" follows from "poor people underperform because they're genetically inferior" (whatever that even means).

You don't need to be von Neumann to do _many_ specialist jobs, in particular in trades, it mostly requires a lot of experience with the domain.

I'd invite self-proclaimed 150 IQ Übermenschen to DIY renovate their house and report how well it went, compared to a trained team of 90 IQ physical workers who know their shit. As the classic article says, reality has a surprising amount of detail. Train people to work with that detail so you don't have to, pay them well, and you have a functional society.

Expand full comment

I've never met anyone who didn't think poverty is bad. If you're implying that 'we should give $300 a month to poor people' self-evidently follows from 'poverty is bad,' it doesn't.

Expand full comment
Apr 4, 2022·edited Apr 4, 2022

There's four issues:

1) How much money is worth spending on "poverty is bad, we should alleviate it" compared to other things that might have a larger net benefit to society. We have limited amounts of resources, so we want to allocate our resources as efficiently as possible.

2) These programs don't actually *cure* poverty; it's palliative care. So you throw money at it, and it doesn't actually solve the problem, so you spend money on it every year, decade after decade, as opposed to spending it on other things that might actually lead to lasting societal benefits.

3) Many people are okay with giving people a hand up (i.e. getting people back to a good place where they can be productive members of society) but don't like giving endless handouts to people, as they see it as throwing their money down a hole (which it is). As such, there are political motivations towards finding these results.

4) People are in denial of the high heritability of intelligence and don't like the idea that we live in a meritocracy and that many poor people are poor because of genetic traits, so looking for reasons why this isn't true is a major thing. It's highly probable that the only way to actually "cure" poverty societally is to engage in broad scale genetic engineering of the population, which people find unpalatable, and that isn't something we will be able to do for 50-100 years, and they want solutions now.

Expand full comment

It doesn't help that many people routinely overstate the degree to which income correlates with educational data, which I think is contributing to the credulity here. Yes, all educational data has income stratification, but it's smaller than liberals constantly insist, and people have persuasively argued that's a racial effect masquerading as an income effect. (As in, you throw race into a regression and income ceases to be a significant predictor.) Claims about school funding and expenditures are even worse. But "it's the money, stupid!" is just a really tempting standpoint for liberals.

Expand full comment

The San Gabriel Valley east of Los Angeles is a fairly pleasant place with numerous small school districts, all of which, last I checked, spend relatively similar amounts per student. Average test scores tend to correlate with the ethnic make-up of the student body, with the Chinese-dominant districts such as traditionally rich San Marino and formerly middle-class Arcadia (where my cousins once lived, but which is now dominated by Chinese) scoring the highest, white districts second, and Latino districts third.

Expand full comment

That seems surprising to me. Is it not the case that within racial groups higher income goes with higher education?

Expand full comment

Yes, but again, the correlation is considerably lower than people seem to think. People constantly say "the SAT is just an income test," but for example in one of the biggest and most representative datasets available that r squared is .0625. So less than 7% of the variance is explainable by income. And note that, for example, Asian students from the poorest income quintile outperform white students from the second-richest income quintile on the SAT Math.

The bigger question is why people think that this is just a priori true. Yes, I can think of ways that lower incomes reduce performance, but why would that necessarily be a powerful determinant?

Expand full comment

I wasn't even thinking of income as directly affecting performance. Instead, educational credentials sort people into different income brackets (though I realize there are people with PhDs making less than some other people without any college) and their kids tend to wind up similar.

Expand full comment

And what is a well-known, very scientifically-studied means through which parent traits are passed down to children?

Expand full comment

Man I really find it hard to believe that with what you know, deep down you don't just realize the brutal truth about black and white IQ and achievement gaps.

Expand full comment

Heredity is one obvious way. Additionally, parents determine where you grow up.

Expand full comment
Jan 29, 2022·edited Jan 29, 2022

I wonder if someone can help me understand why one could not equally say, "if you throw income in, race ceases to be a significant predictor." My reasoning is motivated here: I would much rather this be true. Thanks if you can explain why/whether we know one or the other is the causal variable.

Expand full comment

Multivariate regression analysis has this somewhat mysterious property - for example, the order in which you enter variables matters a great deal for the analysis. It's a function of overlapping sums of squares. Unfortunately I am not really statistically equipped to explain it, but there are plenty of people who comment here who are.
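A toy illustration of that overlapping-variance point, with synthetic data and generic stand-in variables `a` and `b` (a sketch of the phenomenon, not a claim about the real regressions):

```python
# With two correlated predictors, the variance credited to each depends on
# what else is in the model: b looks predictive on its own, but adds almost
# nothing once a is included, because its "explanatory power" overlaps a's.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
a = rng.normal(size=n)
b = 0.8 * a + 0.6 * rng.normal(size=n)  # b correlates ~0.8 with a
y = a + rng.normal(size=n)              # y is actually driven only by a

def r2(cols, y):
    # R^2 of an OLS fit with intercept
    X = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

r2_a, r2_b, r2_ab = r2([a], y), r2([b], y), r2([a, b], y)
print(round(r2_b, 2))          # b alone: sizable R^2
print(round(r2_ab - r2_a, 4))  # b entered after a: essentially zero increment
```

So "which variable survives the regression" can hinge on which one you credit with the shared variance, which is why the causal question can't be settled by the regression alone.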

Expand full comment

Compare low income whites to high income blacks.

Expand full comment
Jan 26, 2022·edited Jan 26, 2022

"It makes it feel real - you can literally the effects! " The "see" is missing. I guess it should be: "...you can literally see the effects!"

Expand full comment

I'll post here too I guess:

First paragraph : but a bunch of other people beat me to it (...) have beaten me to it.

+ fat fingered a bracket in "It's":

Getting to the paper itself: it’s called The Impact Of A Poverty Reduction Intervention On Infant Brain Activity. It’[s part

Expand full comment
author

Sorry I am so bad at this, and thanks.

Expand full comment

No need to feel bad, graded against professional writers that do their own editing, I'd put you in the upper tritile. Typo threads are practically an internet institution at this point.

Expand full comment

Did you write tritile instead of tertile as a joke to illustrate the point?

Expand full comment

I wish, but I'd genuinely seen it spelled the former more often than the latter. A different flavor of irony, then!

Expand full comment

I've only ever seen "tercile".

Expand full comment
Jan 26, 2022·edited Jan 26, 2022

EDIT: I now understand you meant multi-dimensional into single-dimensional, which makes way more sense.

I think "non-inherently numeric result" is the wrong way to put it. I've worked with EEG, and the output is very much numeric and easy to immediately work with, the issue is how noisy it is (and, as you said, you can invent whatever hypothesis you want by reading patterns in the noise). You do seem to understand that later, for what it's worth, but it just makes the initial explanation more confusing.

Expand full comment
author

I've had very little contact with EEG, so thanks for sharing your expertise.

Expand full comment

I have lots of experience with EEG (did my PhD on it) and your post doesn’t make any obvious errors. I would just clarify that it’s not that some people have “lower-frequency brain waves“ than others, but that the lower-frequency parts of their EEG are less loud, either proportionally speaking or in absolute terms. We always have a mix of oscillations at multiple frequency bands going on, where to a first approximation long distance coupling is reflected in the lower frequency parts and high frequencies are more local.

There also are differences in the peak frequencies of each band, but that’s a different issue.

Meanwhile for example the Vox article gets the basic neuroscience of EEG activity wrong.

Expand full comment
author

Thanks, I should have said something like "disproportionately low-frequency". I'll edit that.

Expand full comment

'papers apparently “do not have to go through anonymous peer review”.'

Stuart Ritchie: "Contributed" submissions do get peer-reviewed - but there must be SOMETHING easier about this way of submitting articles, otherwise why would it exist? My guess is that the Contributor's handpicked choices for reviewers are almost always granted: https://pnas.org/authors/member-contributed-submissions

Expand full comment
author

Yeah, I'm not sure how to reconcile those two things, maybe it's peer review but not anonymous? I've edited that paragraph to make it clearer.

Expand full comment

The real key is right here:

"The Editorial Office sends the manuscript to the assigned reviewers...When the reviews are completed, the contributing member works with his or her coauthors to revise the manuscript in response to the reviewers’ comments."

Most papers submitted to PNAS are rejected by editors without being sent to reviewers. Another sizable chunk are rejected after review, which is entirely at the editor's discretion. Contributed papers are much more likely to get published because they bypass the initial filter and always get the chance to revise after review.

I've published in this journal through the non-contributed route so I've seen how editors can look at mixed reviews and either go "seems fine" or "mixed reviews? no way!". I imagine contributed submissions get the former.

Expand full comment

Indeed, I've had many editorial rejections (which is also anonymous - you only learn the editor if it's accepted), usually just for the stock reason that it "won't appeal to a broad readership". My most exasperating experience with PNAS was the time when they took over six weeks to editorially reject my manuscript.

Expand full comment

Scott, for PNAS, it seems the initial review is with suggested reviewers, followed by an independent peer review where the editor selects additional reviewers. This mechanism exists in multiple journals. For example, the Journal of Clinical Investigation lets members of the American Society for Clinical Investigation request an automatic review, and allows you to select potential reviewers. Like Adam Mastroianni says below, this effectively lets you bypass desk rejection without review by the editor. I recently went through this process with JCI (not PNAS), but assuming it's similar:

-We submitted the manuscript, and gave a list of potential expert reviewers.

-The paper was reviewed and rejected.

-Even though we gave a list of reviewers, the comments were anonymized. We don't know who agreed to review the paper or not.

-Despite it being rejected (this was an expected outcome) the process was really helpful, as we got a lot of tough but expert feedback on our work.

A family friend who is a member of the National Academy of Sciences chatted with me about this mechanism a few months ago. He said it truly used to be an old boys' club where you could chat up an editor and get something published with minimal or lax review, but a combination of poor optics and some pretty weak papers getting into the journal caused them to tighten up their approach.

Expand full comment

Martha Farah, one of the reviewers, has co-authored multiple papers with the last author, Kim Noble. Luby has published with Fox according to Google. So, you can definitely pick "friendly" reviewers for a contributed PNAS article. Farah is an fMRI researcher, not EEG; unclear about Luby, but she may also be MRI, so there were no EEG reviewers (different modality, different processing) for the paper. It would have been good to see some more arms-length review of the paper, but maybe that is happening on social media and blogs.

Expand full comment
Jan 26, 2022·edited Jan 26, 2022

I recently reviewed a contributed submission for PNAS. The member pre-selects their reviewers and submits the names to the office along with the manuscript (that's why the reviewers are named in the paper - this is NOT normally the case). Everything else proceeds roughly the same, but reviewers will differ about how seriously they take the task, especially knowing that an outright rejection is very unlikely and they can't hide behind anonymity.

Expand full comment

Interesting--you're not even anonymized among the other reviewers? We recently sent something to JCI with suggested reviewers, but the comments were still from "reviewer 1, reviewer 2, etc." so I don't know who's who or if the people I suggested even agreed to do the review.

Expand full comment

Right, and for normal submissions to a normal journal, the editor picks one of your suggested reviewers and then someone you didn't suggest (never only the ones you suggest). In the case of a contributed PNAS paper however, the editor *only* sends it to the reviewers who have been pre-screened by the authors. I don't remember for sure if the names were appended to the reviews we then submitted, so it's possible the authors wouldn't know which review came from which reviewer, although it's pretty easy to guess when there's only two or maybe three.

Expand full comment

But from the PNAS website it looks like that's just Tier 1 of the review process? Am I misunderstanding this? Tier 2 seems to be another, more traditional review:

https://astralcodexten.substack.com/p/against-that-poverty-and-infant-eegs

Tier 2: Independent peer review

The Editorial Office sends the manuscript to the assigned reviewers and to others who may be selected by the Editorial Board member, manages the review process, and collects the reviewer reports. When the reviews are completed, the contributing member works with his or her coauthors to revise the manuscript in response to the reviewers’ comments. The revised manuscript and a point-by-point response are returned to the reviewers to ensure that their concerns have been adequately addressed.

Expand full comment

Tier 2 is just the point where the editor sends it to the reviewers who already were selected by the authors and agreed.

Think about it this way: if an academy member sends me a manuscript asking if I would be willing to review it, I'm unlikely to agree if I plan to reject it*. I was actually a little uncomfortable with the process such that I almost declined at the outset. But I thought the work was really good, so I figured it didn't cross any of my personal ethical lines to help them out then, even if the whole process still feels kind of gross.

*I don't know for sure, but I bet that if this happens, PNAS would just allow the author to select another reviewer to replace them and/or resubmit fresh.

Expand full comment

That’s an interesting point. Though I don’t understand: how do you know if you’re going to accept or reject the paper before you’ve read it? Or is that just based on the authors and the subject matter?

This should have a lot of measurable effects. First, the member contributions ought to be, in general, lower-quality papers (you could crudely measure this by comparing # of citations). Second, you could see how often people take advantage of that mechanism. Are there people who publish two PNAS papers per year?? It’s not Nature or Science, but it’s a very good journal, and 2 pubs there per year could certainly sustain a successful academic career (thinking very superficially about what counts as academic productivity). There must be some social cost/norm against using the mechanism routinely, right?

Expand full comment

> Why do groups with no real difference between them look so different on the graphs?

Because the power spectrum is basically the Fourier transform of the original signal, and even for signals that are basically noise, similar frequencies are highly correlated. This gives the curve that "continuous" feeling, and makes it look like, whether it's up or down, it can't be an artifact of noise. Noise is not nice and continuous!

It probably doesn't help that the plot's y-axis says the numbers are represented as z-scores, which exaggerates the differences between variables with relatively flat distributions.
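A hedged sketch of why noise comes out looking smooth (pure white noise in, no real EEG): standard spectral estimation averages periodograms over segments, which is exactly what turns jagged per-bin noise into a tidy continuous-looking curve.

```python
# A single-segment periodogram of white noise is wildly jagged; the
# segment-averaged estimate (the kind of spectrum papers plot) has roughly
# 10x less relative scatter, so noise acquires a deceptive "smooth" look.
import numpy as np

rng = np.random.default_rng(1)
segs = rng.normal(size=(100, 256))        # 100 segments of pure noise

raw = np.abs(np.fft.rfft(segs[0])) ** 2                      # one segment
avg = (np.abs(np.fft.rfft(segs, axis=1)) ** 2).mean(axis=0)  # averaged

print(raw.std() / raw.mean())   # relative scatter around 1: jagged
print(avg.std() / avg.mean())   # much smaller: the smooth-curve effect
```

So a plotted group-average spectrum can look like a clean, meaningful shape even when the underlying signal is indistinguishable from noise.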

Expand full comment

Is it fair to summarize the study setup like this:

1. State some research question and some hypothesis.

2. Collect some arbitrary data, unrelated or loosely related to (1).

3. Some p < .05 is taken as a proof for (1).

I am aware that the researchers typically use slightly different terminology (they probably don't write "as a proof for", but something a bit more blurred).

4. Bonuses: babies, cute animals, virtue signals of all kind

Expand full comment

It's not just "this study" that follows that plan. Studies following that plan are a well-known category of pathological science and probably the most prevalent kind at present. Scott is arguing that this study is in that category.

Expand full comment

Thanks, I wasn’t aware that it is so frequent.

One potentially interesting aspect of that pattern is the use of, let’s say, surrogate variables. If you study a new chemotherapy in oncology, you typically use surrogate variables at earlier clinical phases (e.g., tumor response, if the size has shrunk by some percentage, yes or no). Later, in the Phase III randomized clinical trial, you have to use a so-called hard endpoint. Often, this is overall survival (OS) that includes all kinds of death, not just tumor-related. I think the argument is that you wouldn’t be able to distinguish the reasons anyway (e.g., a suicide can also be caused by the tumor), and that any bias by unrelated deaths is cancelled out between the two therapy arms.

Now, we have tumors that grow very slowly, and waiting for an effect in OS is not feasible. The follow-up interval would last too long. Therefore, in some tumors, surrogate endpoints are accepted such as progression-free survival (PFS), meaning that the event time is the minimum of visible tumor growth, and death, whichever occurs first. The requirement is that there is a tight statistical relationship between PFS and OS, and a more or less obvious causal relation between tumor progression and death. And a lot of discussion and a consensus process at the regulatory side.

In the present study, we don’t have that. We may have some unreliable group difference in a dubious EEG measure between poor and rich children in some past study, which is of course far from a tight statistical relationship, and no obvious causal relation between that dubious EEG measure and cognitive abilities. I mean, the present example is just an awful neuroscience study, but more harmful things can be observed in other areas, with cholesterol serving as a surrogate in statin trials, or plaques in Alzheimer’s, or telomeres and aging. In the present kind of research, there’s no bureaucratic consensus process on what endpoint to choose; as a consequence, people can just pick whatever they like.

And, of course, the old problem, multiple testing and fishing for significance.

Expand full comment

But if it has the result of low income families getting a bit more cash to make things more comfortable, then I'm all for that.

https://nakedemperor.substack.com/

Expand full comment

Even if it comes at the cost of high income families getting a lot less cash, to make things a lot less comfortable?

Expand full comment

There's plenty to go around, it's just in the wrong hands.

Expand full comment

Not that much less comfortable. Diminishing returns and positional goods and all that.

Expand full comment

Yes? The marginal welfare value of a dollar is plainly smaller to someone making ~$100k+ than someone making ~< $25k. Taking a family from insolvency to the barest edge of stability creates much more happiness than it costs for someone to buy a Subaru instead of a Tesla or get Postmates twice a month instead of twice a week.
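A back-of-the-envelope version of that claim, using log utility (a standard but assumed functional form, with made-up incomes):

```python
# Under log utility, the welfare gain from an extra $1,000 at a $25k income
# is roughly four times the gain from the same $1,000 at $100k.
import math

gain_poor = math.log(26_000) - math.log(25_000)
gain_rich = math.log(101_000) - math.log(100_000)
print(round(gain_poor / gain_rich, 1))  # ≈ 3.9
```

The exact ratio depends on the utility function you assume, but any concave one gives the same qualitative answer: the transfer creates more welfare than it destroys.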

Expand full comment

I feel like the core problem of this perspective is that it requires wealthy people to exist so you can take their money and give it to poorer people, while at the same time disincentivizing people from making more money, because they have to give more of it away. This is less of a problem for someone in a high-income bracket, but some people would rather get government support than work 40+ hours a week for minimum wage. So they become a net negative on the money pool instead of a net contributor. I've seen this in people I know living in subsidized housing. One person in the couple decided not to work because if he got a job, they couldn't stay in the apartment.

I think a better situation is to get the maximum amount of people working and push as many of those people into self-sufficient middle class as possible. You then get far fewer people relying on the government, and far more tax payers that can act as a counterpoint to the rich people at the top.

Expand full comment
Jan 28, 2022·edited Jan 28, 2022

I'm going to ignore for a moment that your argument is based on an anecdotal data point of n=1. The problem you're describing is misaligned incentives, not lazy humans.

Yes, the wealthy (and the corporations) should pay their taxes and should not feel burdened by taxes aligned with helping their fellow citizens to live in better conditions. Perhaps, to paraphrase Orson's comment, they could drive one Tesla instead of one for every day of the week.

Expand full comment

I never said it was lazy humans. It absolutely is misaligned incentives, which is the entire problem. There are many cases of this; this is the most direct one that I have experienced.

Can you see the irony of needing billionaires to exist so you can tax them while simultaneously saying they are the very problem? The best way to solve poverty is to enable people to lift themselves out of it by giving them the tools and access to jobs they need to support themselves. The larger the middle class, the more money we generate in taxes, which we can use to support those who very much need it and to fund programs that advance society, like hard-science research.

Expand full comment

Leftists would have fewer problems if they accepted that their moral intuitions are just intuitions, and committed to sticking by them. Instead they've deluded themselves into believing that their moral intuitions are backed by science, so they do a bunch of motivated studies, and when they discover that the ocean of the unknowable is vaster than they imagined, they deceive people about the results, justifying it on the grounds that "this will allow us to achieve a moral good".

You don't have to do that. You could just say that we should give poor parents cash because it's a moral good in itself. A study showing an increase in beta waves is not a prerequisite for it to be a moral good, nor will it serve to defeat the conservative counterargument about how this changes parents' incentives or increases the national debt, etc. All this does is diminish the reputation of science, in exchange for giving leftists the feeling that they are the smart ones and the good ones. That they don't have to think too hard about their beliefs because the experts have already concluded that the facts are on their side.

Expand full comment

I don't have much to add but I'm seconding this comment as a leftist.

Expand full comment

The problem is that when you claim there is an objective reason to do something (brainwaves, in this case), and that reason turns out not to hold, you hurt the cause of doing the thing in the first place.

You're better off just saying you want to do that thing because you think it is morally right and make the case that way.

Expand full comment

This is an absolutely terrible attitude to have and I condemn anyone who holds to this. Poisoning the well is not good for discourse (shocker, I know). If you're allowed to spout off lies to get whatever result you desire, then, well, we saw where that led us, and we're still dealing with the aftermath.

Expand full comment

The New York Times posted this study as the third most important news story on the NYTimes.com homepage and issued a tweet about it under the "Breaking News" caption.

This is a good example to keep in mind when reading Scott's next post about how the news media seldom outright lie, which I agree with. I'd be hard-pressed to find an outright lie in Jason DeParle's news story in the NYT.

On the other hand, the news media has a huge amount of discretion over what it treats as Front Page Breaking News and whether it approaches the story from a credulous or skeptical perspective.

For example, here's a two-week-old study in "Developmental Psychology" with a much larger sample size and a much longer duration that finds that the Democrats' idea of more funding for pre-Kindergarten education is not a good idea:

https://www.unz.com/isteve/is-pre-k-school-really-a-panacea/

Unlike the brand new EEG study, I haven't seen much news coverage of this.

Expand full comment

> But this study basically shows no effect. We can quibble on whether it might be suggestive of effects, or whether it was merely thwarted from showing an effect by its low power, but it’s basically a typical null-result-having study.

This treatment of statistical (in)significance is problematic. I've only looked at the charts and read Gelman's blog post, but it seems to me that the study produced evidence that was most consistent with some effect, but with enough uncertainty around that estimate that it's also reasonably consistent (though less so!) with no effect.

Statistical significance does not prove that some association is exactly as observed. But conversely a lack of statistical significance does not prove that no association is present. A confidence interval (or similar) around the estimate would be more informative, but to the extent that we're having to use p-values, a low-ish but >0.05 p-value might loosely be interpreted as saying the data are most consistent with there being some effect, but also reasonably consistent with there being no effect.
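A toy numeric version of that point, with a made-up effect size and standard error and a normal approximation (nothing here is from the actual study):

```python
# An estimate of 0.20 with SE 0.12 gives p ≈ 0.10: "not significant", yet
# the 95% CI spans zero while sitting mostly on the positive side.
from statistics import NormalDist

est, se = 0.20, 0.12                     # hypothetical effect and std. error
z = est / se
p = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
lo, hi = est - 1.96 * se, est + 1.96 * se

print(round(p, 2))                  # ≈ 0.1: fails the 0.05 cutoff
print(round(lo, 2), round(hi, 2))   # CI straddles zero but is mostly positive
```

Reporting the interval makes clear what "not significant" actually means here: the data point toward an effect but can't rule out zero.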

Expand full comment

I agree with this point, as well.

I'm more annoyed at the breathless and credulous media coverage of the study than the study itself, which seems like a potentially interesting if small piece in a large puzzle. Those are good and useful things to have, and we shouldn't just dismiss such studies as "a typical null-result-having study".

Expand full comment

Is it really interesting though? I mean, the EEG of the children of cash transfer recipients is not inherently interesting.

Expand full comment

Nothing is inherently interesting, but if the people with an EEG want to measure that, then they're free to.

Expand full comment

I probably should not have said "inherently". When I said it's "not inherently interesting", what I meant is that it's only interesting as a proxy for things we care about. (Such as the intellectual development of children) That's how both the authors and the gushing media pieces pitch the study. That's also the assumption in the comment I replied to which referred to it as "a potentially interesting if small piece in a large puzzle".

The whole research program is supposed to contribute to the question "Are cash transfers good policy?" And my point is that it doesn't offer a contribution to that question, so in that sense, it's not interesting. Of course, some weirdos with EEGs might be interested in what happens to the EEGs of the kids of cash transfer recipients.

As I snarkily put it when the study came out: "For centuries, policymakers have wondered how we could increase the gamma waves of poor babies. Thanks to EEG we no-longer have to rely on silly proxies such as the baby's overall health, their vocabulary development, subjective stress levels, etc..."

Expand full comment

I had a great statistics lecture in a course for physician-scientists. The professor asked us about two pilot experiments and to tell him which we were more excited about; ie which should be pursued further. One where there was a small difference but a very low p-value and another with a huge difference but a large p-value.

Obviously this somewhat depends on context, but despite many people being stoked about the low p-value, all that means is that you're extremely certain about a small effect. Meanwhile, a big effect with a high p-value is potentially very exciting! You just need to design a better experiment with more power.

For someone thinking truly probabilistically, there should be very little difference between the interpretation of a p=0.05 and a p=0.0625; the difference in the information you've obtained from those two studies is marginal. In practice, especially in areas where great evidence exists, a p-value of 0.1 can be quite compelling.
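In z-score terms (two-sided, normal approximation), those two cutoffs really are nearly identical:

```python
# p = 0.05 corresponds to z ≈ 1.96; p = 0.0625 to z ≈ 1.86: almost the
# same amount of evidence against the null.
from statistics import NormalDist

z_05 = NormalDist().inv_cdf(1 - 0.05 / 2)
z_0625 = NormalDist().inv_cdf(1 - 0.0625 / 2)
print(round(z_05, 2), round(z_0625, 2))  # 1.96 1.86
```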

We draw a line at 0.05 because we often have to draw a line somewhere, in a way that isn't post-hoc. There are some efforts to get around this (some journals publish "fragility indices" to say how many outcomes would have to go the other way in their study to make the result negative) but there's no consensus.

So while I agree with Scott that it's probably irresponsible to call this association significant if the statistics were done incorrectly, a p-value greater than 0.05 shouldn't be a magic cutoff in your mind (jimv put this really nicely above)! I'm also a little surprised the statistics were done incorrectly for a PNAS submission (PNAS is a great journal!) but maybe that's naïveté on my part.

Finally, as I'm sure a lot of other people would point out, it's hard to know how meaningful a difference in beta and gamma waves is, and using a surrogate endpoint for something like this strikes me as imprudent.

Expand full comment

This is an important point! The data may not allow the researchers to reject the null at standard levels of significance, but they are still most consistent with a positive effect. Perhaps the headline should read: “Under-powered study may have found…. More research needed.”

Expand full comment
author

This is basically what I say in "Andrew Gelman finishes his article by warning us not to conclude that cash grants don’t affect kids’ EEGs. For all we know, they might and this study is just underpowered to detect it. That’s fine and I agree."

Possibly I could have been more careful here, I'm not sure. Is p = 0.49 "no effect" or "some effect but it didn't reach statistical significance"? P = 0.1? I think the way I used the term is basically fair but I understand why you might not.

Expand full comment

I'm probably most wary of the words in the first of your sentences I quoted: "shows no effect". Showing no effect would be quite a strong claim! Based on my reading of your article and Gelman's blog post (not the article) it sounds like the study wasn't powered enough to show no effect.

A slightly more careful wording would have been "does not show an effect". But given what the point estimates look like, I'd probably steer clear of that too, and aim for something like "does not provide compelling evidence for an effect".

This isn't exactly the same territory as your No Evidence post from a couple of months back, but my brain pattern matches it similarly, and craves the nuance of saying what evidence this study is or isn't providing.

On the p-values, mathematically a p-value of 1 (in the typical case where it's calculated against a null hypothesis) means that the data is most compatible with no effect. That's going to equate to the case where the point estimate is of zero effect. Of course, even if that cropped up, it could be associated with a wide confidence interval, meaning that the data are also reasonably compatible with positive and negative effects.

When we're dealing with messy human beings (pretty much anything biological or social, I reckon), few effects are ever actually zero. Of course they could be small enough that they're not of any practical significance. And/or they could be highly heterogeneous, being positive in some sub-populations and negative in others.

Expand full comment

Because they were essentially posted together, I don't mind putting this thought out here more related to your other post on Bounded Distrust.

Hardly anybody has the time to sift through the information as you did. As an aside, that's why I really appreciate you taking the time to do this. But, most people don't read your blog or other things to help sort through. Long story short, this study is accurately binned as "unproven, possible/likely false" and the NYT ran the story anyway. For most people, the conclusion *must* be that the NYT either flatly lied to them, or was so much more interested in pushing their agenda than the truth, that they were willing to forward a study that was very likely false in order to advance a narrative. For most people, the proper mental configuration *must* be to consider the NYT suspect. The only other alternatives are to trust in known liars, or spend far too much time sifting through information to try to understand and sort it, knowing that most of us lack the intellectual ability and time to do that properly.

Fake News is real, and there's no way for the average person to solve it. It has to be fixed at the institutional level, by the media outlets that publish lies on a regular basis. Even those of us who can and do take the time to sort through information cannot let it slide that the media regularly lies to us, even in cases like this where they are presenting something "potentially" true. It's a weak study and should not be printed in the media.

Expand full comment

> For most people, the conclusion *must* be that the NYT either flatly lied to them, or was so much more interested in pushing their agenda than the truth, that they were willing to forward a study that was very likely false in order to advance a narrative

Alternative hypothesis: science journalists aren't all that bright, they aren't qualified to distinguish a good study from a bad study, and in any case they don't have time to look deeply into these things. If a study in a reputable journal shows something, they report "Study says something", and that's the limits of practical epistemology for them.

Expand full comment

Jason DeParle isn't a science reporter for the New York Times. He covers poverty and immigration.

Expand full comment

I'm not sure a defense of the New York Times should paraphrase to "they aren't good at their jobs so they'll share false material from their ignorance [as long as it confirms their worldview.]" I'm not sure that's worse than doing it on purpose, but it's still bad in society-wrecking ways.


You should write a listicle by the title:

"Top Ten Cases of Nominative Determinism"

You won't believe number 7! (It's the neurologist Lord Brain)


Haha, nice.


Lord Adonis, on the other hand, is a disappointingly plain man.


It's hard enough to do good science and be statistically rigorous even on "boring" topics like cell biology, physics, etc, when there's so much pressure to get positive results for the sake of your career. Take those career pressures and the normal desire to be proven right, and combine that with a politically-charged topic and a field where 95% of researchers have the same political ideology and....... well, that doesn't seem ideal.

This is why I think there's some case to be made for promoting political and ideological diversity in academia, at least in fields with politically-charged topics.


There's a longitudinal study that took place in the Gambia, which used EEG & fNIRS as well as a battery of behavioural and socioeconomic assessments to look at the effects of poverty/malnutrition/low-SES status on brain development, and while it's not over yet, their conclusions are mixed (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6767511/). They pretty much did everything you could ask for with a poverty assessment study. And so far, they've found that Gambian babies seem to habituate to new stimuli much more slowly than their comparative cohort in the UK.

Is this because they have worse attentional performances or because they live in noticeably different households to UK children - massively higher average household size, more people to factor in etc.? For a study of this kind, it's not particularly enlightening? (Admittedly I think COVID impacted their work, and they should find out some more when the children return for a 3-5 year check-up.) But it is suggestive that certain types of neuroimaging have similar problems to heritability, where results don't translate brilliantly across geographic areas.


"Andrew Gelman says no" could be the title of a whole book series


I hereby request a new post tag


You should read what he says about other stuff in PNAS!

Jan 26, 2022·edited Jan 26, 2022

I now think journalists should be banned from reporting on science, as a matter of professional ethics. I think it's clear at this point that they just can't do it right, and I think this sort of bad science journalism is a leading reason why a lot of people distrust scientific institutions.

If they want to report on the state of science, maybe they should have 3 randomly selected researchers from 3 randomly selected institutions write a review.

deleted Jan 26, 2022·edited Jan 26, 2022
Comment deleted

I agree, that's why I said 3 researchers from 3 different institutions, all randomly selected, who would all have to serve as coauthors and come to an agreement on the content. There's still a small chance some weird stuff will get through if random selection happens to align just right, but that's still much better than the status quo.


https://phdcomics.com/comics/archive.php?comicid=1174

Those reviews exist, but they're not what people choose to read. Professional ethics are a flimsy suggestion in the face of market forces (i.e. public demand).


> Those reviews exist, but they're not what people choose to read.

Those reviews aren't published in the New York Times and similar public outlets.

> Professional ethics are a flimsy suggestion in the face of market forces (i.e. public demand).

The same can be said for all existing ethical standards in journalism, but some of those ethical standards persist despite market pressure.

Furthermore, given the decline of traditional media and the growth of independent media, I'd say the market forces aren't favouring the old ways much anyway, so this is a ripe time to try something different.


>Those reviews aren't published in the New York Times and similar public outlets.

"New York Times and similar public outlets" is a complicated phrase. University PR tends to be public, and you can go read Nature.com or Medical News Today or what have you whenever you want. Those publications don't have a fraction of the circulation of the NYT, sure, but that is a result of their various stances in a relatively open media ecosystem. Salience is a result of content, not something handed down from on high.

> I'd say the market forces aren't favouring the old ways much anyway, so this is a ripe time to try something different.

Try whatever you like. But realize that is very different from a proposal to prevent other people from doing what *they* like, and that it is infeasible to replace the reading public with new models that would result in better incentives.

Jan 26, 2022·edited Jan 26, 2022

> Salience is a result of content, not something handed down from on high.

Sure, and yet the NYT publishes pieces covering science all of the time, and I'm simply suggesting they use a new process when they do so to avoid the mistakes that got us into this mess.

> But realize that is very different from a proposal to prevent other people from doing what *they* like

I'm not suggesting we prevent people from typing and publishing the words they want, but to establish a new standard of behaviour whereby the current status quo rife with factual errors is as roundly and harshly criticized by all journalists, the way they currently criticize Fox News pundits.


Without claiming that the NYT is perfect in its editorial strategy, within a first approximation we should expect a version of the NYT that matches the rigor of Nature to have the audience of Nature. That would be a significant economic loss for them, and it seems a fairly pointless one when any reader who cares can... just go read Nature.

The problem is not that good coverage does not exist, it is that the good coverage is not popular (and correspondingly, a smaller share of the market). It strikes me as quite difficult to coordinate a standard where journalists (or any profession, really) set a bar that a large majority would not meet. NYT coverage may be deficient, but it's not particularly bad.

Jan 26, 2022·edited Jan 26, 2022

> we should expect a version of the NYT that matches the rigor of Nature to have the audience of Nature.

Who said pieces that match the rigour of Nature belong in the NYT, or that the pieces I'm talking about would satisfy the rigour required by Nature?

> it seems a fairly pointless one when any reader who cares can... just go read Nature.

Nature is much more expensive than the NYT, and it doesn't cover as many wide ranging issues which means you'd need to buy both anyway, which defeats the purpose of telling people to just read Nature. If you're going to write a science review for a widespread audience, which is the whole context of this discussion, you don't want it in Nature precisely because that's not the audience.

> The problem is not that good coverage does not exist, it is that the good coverage is not popular

I disagree, but even if that were true, journalists are not supposed to write only about what's popular, so I'm not even sure how that would be relevant; some content is just too important to worry about popularity contests, and I think science communication qualifies for all the reasons we see around us now.

> It strikes me as quite difficult to coordinate a standard where journalists (or any profession, really) set a bar that a large majority would not meet.

The only standard that journalists would have to meet is to know when to get out of the way and let real scientists speak without reframing them, selectively quoting them or presenting their words within a narrative. Doesn't seem that hard.


One thing I would have liked to see the researchers address was the significant difference in age between the control and experimental groups. The research is fairly heterogeneous in this area, but from what I understand, the younger you are, the less neural attenuation you have in various areas of the brain. Even if the average age at EEG collection varied by a month between the two groups, that could contribute to differences in power seen here. One example is from this VEP study: https://www.cambridge.org/core/journals/visual-neuroscience/article/abs/development-of-lateral-interactions-in-the-infant-visual-system/FCBAC731A0B367404B116639A2C1757A. From the abstract: "We studied the development of the short-and long-range interactions at 100% and 30% contrast in human infants using both VEP amplitude and phase measures. Attenuation of the second harmonic (long-range interactions) was adult-like by 8 weeks of age while the strength of the fundamental (short-range interactions) was adult-like by 20 weeks suggesting a differential development of long-range and short-range interactions. In contrast, corresponding phase data indicated significant immaturities at 20 weeks of age for both the short-and long-range components."

author

The significant difference in age is in the opposite direction from what you would expect to produce these findings, so it doesn't worry me too much.


Obligatory typo nit: "but a bunch of other people beat me to it (see eg Philippe Lemoine, Stuart Ritchie) have beaten me to it."


Stuart Ritchie is wrong about whether papers by National Academy members are peer-reviewed. In fact the review process is basically the same, it's the editorial process that changes, that is, the stage when the journal decides whether to send the paper to reviewers. NA members are given a certain number of "silver bullets" that allow them to skip the editorial stage and go straight to peer review. It is certainly bad and I think it does lead to bad science but to say that those papers are not reviewed is very much overstating the case.

Jan 26, 2022·edited Jan 26, 2022

As I discuss in more detail in another post, the other major difference (and it's a big one that really calls the whole thing into question) is that the member pre-selects the reviewers and it's no longer anonymized.


Good to know, thanks for that correction. I should also say that, anecdotally, some of the people I know who have this privilege have gotten some dubious papers into PNAS.


This reminds me of TLP's fantastic post:

"In a recent fMRI study, a salmon was shown a series of pictures of human faces showing various emotions: can a salmon distinguish them? and what brain regions are involved. 15 pictures, ten seconds each.

I won't bore you with the anatomy. Because of the small size of the brain, exact brain structures could not be distinguished, but something in the brain did light up. A statistically significant number of voxels, comprising an area of 81mm3 in the midline of the brain, were active (p<.0001).

So can fish interpret human emotions from a picture? I have no idea. I do know, however, that that fish can't do it: it was dead."

https://thelastpsychiatrist.com/2009/10/the_problem_with_science_is_sc.html

author

Last time I cited that comment someone in the field chewed me out because techniques have gotten better since then. I don't know enough about MRI to know if any particular imaging study is making the dead salmon mistake or not, so I'm nervous about mentioning it.

(although you'll notice the tweet I linked has a picture of a bear catching salmon in it, which I think is a hidden reference)


Once upon a time at an SSC meetup, I believe https://twitter.com/neurostats told me that issues identified in the dead salmon study were already well known to serious experts in the field prior to its publication, and if I understood correctly a lot of the improved applications/techniques were already available but perhaps less known.

(all mistakes here are due to my poor listening and not due to @neurostats)


Yes, of course, all the serious people know that the field is fraudulent. But that's no defense! The serious people are either powerless or complicit.

Everything about the replication crisis is in Meehl 1967, which everyone serious read, but it took 50 years to be acknowledged as a crisis.

Jan 26, 2022·edited Jan 26, 2022

"Cash aid to poor mothers increases brain activity in infants, study finds"

That headline makes me cry. I have to assume the story is better and that the study has something more going on than "We gave a wodge of tenners to Sharon and immediately her eighteen month old baby, Shanice, had a noticeable spike in brain activity".

I imagine what they mean is "By getting more money, the low-income parent(s) have less anxiety about finances, can pay bills on time, can feed their kids better, and the reduction in stress and improvement in the environment means that the babies are receiving better care and so we conclude better care = being healthier = more brain activity going on = hitting developmental milestones, smarter than if lower brain activity, and other good things".

I'll have to read the thing to see what it's about, but I'm going to predict this is what they mean.

EDIT: Mmmm. The study seems a bit wishy-washy, I don't know if they demonstrated what they set out to demonstrate, and they do seem to realise that. They're coming down on the side of "pro-cash transfers" but I honestly don't know if the extra money *did* make more of a difference:

"However, we do not yet know which experiences were involved in generating these impacts. Future work will examine potential mechanisms affected by the cash gifts, including household expenditures, maternal labor market participation, maternal parenting behaviors, and family stress, noting that pathways may operate in different ways across different children and families."

If you're not tracking where the money is going, and you don't know what is going on, you can't say "X shows Y happened as a result". $20 a month is not going to make a big difference, so the $333 is the one to track.

And the problem is: you can have neglectful parent(s) who use the extra cash to spend on themselves and the kids get no benefit. You can have parent(s) who are trying to do their best, spend the money on paying bills or buying food and clothes for the kids, etc. AND IF YOU DON'T KNOW WHICH IS WHICH, YOU DON'T KNOW JACK.

Is Susie Spendthrift's baby one of the ones with higher brain activity? If so, the extra money isn't the reason the kid is doing okay. Is Sally Striver's baby one of the ones with lower brain activity? If so, then despite Sally doing the right thing, lack of enough money is not the problem here.

Poverty is terrible, growing up as a child in a household where there is constant anxiety over paying bills and not having a financial cushion if anything goes wrong does make you anxious, and more money is probably better - but unless you know where best to direct that extra money, then your studies are not much use at all.


> We gave a wodge of tenners to Sharon and immediately her eighteen month old baby, Shanice, had a noticeable spike in brain activity".

The internet meme culture got us covered.

https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Fi.kym-cdn.com%2Fentries%2Ficons%2Fmobile%2F000%2F035%2F549%2Fcover4.jpg&f=1&nofb=1


A possible reason for "wanting" to find physical evidence of harm from poverty is that it is a holdover from a distant past when Liberals were "soft hearted" and Conservatives were "hard headed" and finding physical harm from poverty was supposedly a way to persuade Conservatives.

Jan 26, 2022·edited Jan 26, 2022

"Some kids in Romania were randomly assigned to stay in (probably terrible) orphanages vs. be placed in foster care."

No, they weren't "probably terrible", they were "absolutely fucking appalling", at least going by my memories of Irish interventions in the 90s after the collapse of the Ceaușescu regime.

https://www.bbc.com/news/av/magazine-35944245

This sparked a lot of adoptions of children by Irish families (and other Western countries), and one unhappy side-effect was the setting up of a trade in Eastern European adoptions; canny operators in effect bought and sold babies to rich (by their standards) Westerners:

https://www.irishtimes.com/culture/cashing-in-on-the-baby-rescue-1.1058341

"The television pictures of Romanian orphanages and the children who lived there were among the most memorable of the 1990s. These shocking images sparked a huge humanitarian effort, particularly among the Irish. For many, though, the help they brought was not enough and they became involved in "rescuing the orphans" by adopting them.

However, these rescues unwittingly involved many Irish people in a baby trade. Most children were not orphans; they had parents and brothers and sisters and aunts and uncles and grandparents and these "rescues" were mostly facilitated by large sums of money. Many experts believe that, tragically, this trade from Romania condemned thousands more children to institutions and made reform of childcare almost impossible.

Today, Serban Mihailescu, the Romanian minister for children, says the effect of foreign adoptions was "extremely negative" and encouraged officials to keep the institutions full of children. "The number of children in institutions increased because more and more foreigners wanted to adopt Romanian children and more and more of the personnel in the institutions worked as dealers and they pushed the children for the inter-country adoption. It's like a business, a $100 million business," he says."


The study started in 2001 so it was after the 1990s. Things have gotten better though they're still very bad. Then again, I've never seen a single country with a well run foster care or orphanage system. It's all varying degrees of bad.

Jan 26, 2022·edited Jan 26, 2022

I can actually offer some operator-level expertise! I am a board-certified pediatric epileptologist, and can describe what EEG actually is and what it is purported to measure. And why this study is bullshit. I hit the comment length limit so this will have to be threaded out.

EEG is a test by which we indirectly observe the activity of the cortex (the brain's surface). We do this by gluing twenty electrodes to the scalp in a predetermined grid, attaching each to a differential amplifier, and then digitally recording the voltages detected by each electrode. The resultant tracings can tell us a lot about the state of a person's brain - are they awake or asleep, are they at an expected developmental stage, is their brain experiencing local or diffuse dysfunction, could they be having seizures? For many such practical questions, EEG is a useful tool.

With respect to many other questions, EEG is a blunt instrument. We sometimes call the EEG tracings "brain activities," but they are at a great remove from the brain's actual activities.

Edit: haven't posted since SSC days. Not sure about the etiquette for effortposting or how to make it look pretty. Correct me where I'm wrong.


I'm just going to number these.

1. Electronic and mathematical challenges

Some of the remove stems from basic electronics. The brain's activities are sometimes on the order of millivolts when measured directly at the tissue, but by the time they travel through the cerebrospinal fluid, dura mater, skull, scalp, and scalp-electrode interface, they are typically 20-60 microvolts. Sources of artifact abound, so the EEG's signal-to-noise ratio is not always that great. And that noisy signal must undergo significant amplification. The usual caveats regarding the interpretation of an amplified and digitized signal weigh heavy on EEG.

Pertinently, 28% of the EEGs acquired in this study were unusable - meaning, they were acquired using inadequate technique. A technical inadequacy rate of 28% is not meeting any reasonable quality standard.

Some of that remove stems from the pure mathematics of EEG signal acquisition. The electrode's measurement output is a vector sum of the firing patterns of the neurons within its view. Neuronal firing patterns are wonderfully varied. The architecture of the cortex is fantastically complex. Neurons are firing thousands of times a second in different temporal and spatial patterns, and with different orientations. All of this affects what the electrode sees. Each electrode can "see" about six square centimeters of cortical surface. This degree of spatial sampling is enough for many of the common questions within EEG, but not enough to say you have observed the entirety of the electrode-facing cortical surface.

But moreso, the brain isn't a surface! The cortex is highly convoluted, and much of the processing occurs deep in the wrinkles and grooves. Surface EEG cannot access those wrinkles and grooves - and that is to say nothing of the brain processes performed by structures even deeper than the cortex, or portions of the cortex located in the deep groove between the brain's hemispheres.

Seen this way, an EEG electrode headset is a two-dimensional array that has been conformed around a three-dimensional object. This creates an insoluble mathematical obstacle known to many disciplines as The Inverse Problem.

The inverse problem is best explained in terms of the forward problem. Imagine a lightbulb with a camera pointed at it, but with several obstacles between the lightbulb and the camera. Some of those objects are transparent, some are opaque, but all affect the passage of the light to the camera. If you know the lightbulb's characteristics, the obstacles' characteristics, and each one's position relative to the camera, you could calculate exactly what the photograph will look like. The problem might be difficult, but it is soluble.

The inverse problem is insoluble. Imagine instead that all you have is the photograph. You do not know the positions or characteristics of the obstacles or the lightbulb, but instead must use the photograph to determine the lightbulb's characteristics and its position. Unfortunately, the nature of the inverse problem is that a two-dimensional dataset cannot solve a three-dimensional problem.
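The underdetermination can be made concrete in a few lines of linear algebra (a toy sketch: the 2×3 "lead field" matrix below is invented for illustration and has nothing to do with real electrode geometry):

```python
import numpy as np

# Forward problem: given the sources and a known mixing ("lead field")
# matrix, computing the sensor readings is straightforward.
# Here 2 "electrodes" observe 3 "sources"; the numbers are made up.
A = np.array([[1.0, 0.5, 0.2],
              [0.3, 0.8, 0.6]])
sources = np.array([2.0, 1.0, 3.0])
readings = A @ sources

# Inverse problem: 2 readings cannot uniquely determine 3 sources.
# Anything in the null space of A can be added to the sources without
# changing what the sensors see - a different "brain state", an
# identical measurement.
null_vec = np.linalg.svd(A)[2][-1]   # spans the null space of A
other_sources = sources + 5.0 * null_vec

assert np.allclose(A @ other_sources, readings)   # same sensor data
assert not np.allclose(other_sources, sources)    # different sources
```

With more unknowns than measurements, infinitely many source configurations produce the exact same data, which is why solving the inverse problem always requires extra assumptions.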

Thus if we are to consider the mathematics of EEG signal acquisition, even if we have perfect trust in our equipment there are still many, many degrees of freedom in the analysis. Attempts at reducing those degrees of freedom via careful assumptions are all - qua Nassim Taleb - highly academic.

author

Can you explain the implications of 28% of the EEGs being inadequate? What if they said "Yeah, but we threw those out, which means we had high standards, which means probably the remaining EEGs are fine"?


An EEG lab that meets baseline standards will approach 100% success in acquisition. A technical inadequacy rate of 28% is abysmal, purely on the face of it.

I think I know how that happened. Greg describes their technique in his reply, and it seems they used EEG electrode caps, and that these were home recordings, and then they mined the EEGs for usable epochs.

An EEG electrode cap is essentially an elastic net with electrodes embedded within. The idea is that you don’t actually measure the head, you just slide the cap over the patient’s head like a beanie. In practice this leads to inconsistent electrode placement. EEG data are acquired via the difference between electrodes, and thus the inter-electrode distance will strongly affect the amplitude of the EEG data. If your inter-electrode distances are inconsistent, it will show up as inconsistencies in your band power. Also, with caps it is more difficult to keep the electrodes in place. Studies acquired by cap are uninterpretable because they are filled with artifact and you don’t even know for sure where the electrodes are.

Home recordings are notoriously prone to artifact. Because the patient is moving around a lot more, and because there is no technician to check the electrodes and repair the connections (or just to note when the array is disrupted), a home EEG often succumbs entirely to artifact.

So. When reading the initial description, I was a little taken aback that they would have a technical inadequacy rate of 28%. But when Greg said that they put caps on infants and sent them home, this became more understandable.

I would point out that a home EEG is acquired continuously for hours. I am not sure how long their acquisitions lasted (ours are 24-48 hours typically). But it is at least 6-12 hours presumably. Then they had to use machine code to mine usable epochs lasting seconds. I don’t want to overstate things and I would welcome some more description of these data, but, it may be that they analyzed only a few thousandths of some of these EEGs. And nearly a third of their studies could not even meet the standard of several-thousandths-able-to-be-analyzed.

And if they used machine code to find these epochs, it means that they didn’t look at the EEG tracings to determine what state the patient was in during those epochs. The power bands in wakefulness, drowsiness, and sleep are all very different.
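For what it's worth, that epoch-mining step might look something like this (a minimal sketch, not the authors' actual pipeline; the 2 s epoch length and 100 µV peak-to-peak artifact threshold are assumptions for illustration):

```python
import numpy as np

def usable_epochs(recording, fs, epoch_sec=2.0, artifact_uv=100.0):
    """Cut a long recording into fixed-length epochs and keep only those
    whose peak-to-peak amplitude stays under an artifact threshold."""
    n = int(epoch_sec * fs)
    epochs = [recording[i:i + n] for i in range(0, len(recording) - n + 1, n)]
    return [e for e in epochs if np.ptp(e) < artifact_uv]

# Toy 60 s "EEG": low-amplitude background plus 2 s of gross movement
# artifact that is not aligned to epoch boundaries.
rng = np.random.default_rng(0)
fs = 256
rec = rng.normal(0.0, 5.0, 60 * fs)        # background, tens of uV ptp
rec[10 * fs + 50 : 12 * fs + 50] += 500.0  # large offset = artifact

kept = usable_epochs(rec, fs)
# 30 epochs total; the two epochs touched by the artifact are rejected.
assert len(kept) == 28
```

Note what such a pipeline cannot do: amplitude thresholding says nothing about whether the surviving epochs come from wakefulness, drowsiness, or sleep, which is exactly the state-confounding problem described above.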

Again, there is a plausible causal chain from the reduction of poverty to improvements in neurological development to measurable effects on EEG band power. I am 100% sympathetic to that idea and want that question to be asked. But there are so, so, so many confounders that arise from that question. I am extremely skeptical that EEG of such low quality would provide sufficient evidence to demonstrate those effects.

Jan 26, 2022·edited Jan 26, 2022

2. Pathophysiologic and philosophical challenges

The final source of the EEG's remove from "brain activities" is the most fundamental: what, precisely, are brain activities? What is the brain actually doing? If a study purports to interpret the named frequency bands of EEG (delta, theta, alpha, beta, and gamma) in terms of behavioral and developmental correlates, this philosophical question must be answered.

Leave aside for the moment that we don't really know how the brain actually works, or what that work even is.

An EEG signal looks like squiggly lines, but just like any analog signal it can be described as a summation of component frequencies. Most often this is done visually by a trained reader (me, for instance) but you can also use an array of narrowly-tuned detectors to record each frequency, or use a mathematical operation called a fast Fourier transform to decompose the recorded signal into its likely components. We have named several frequency bands, according to observations of the EEG. Delta is the frequency band ranging from 0.5 to 4 Hz. Alpha ranges from 8.5 to 12.5 Hz. Beta ranges from 13 to 30 Hz. Gamma ranges from 30 to 80 Hz. Theta is whatever lies between delta and alpha - greater than 4 Hz, but lower than 8.5 Hz. (These were named in order of their discovery.) You can think of them like the "colors" of EEG - after all, colors are simply named frequency bands.
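That decomposition step is easy to sketch (a minimal illustration in numpy, using the band edges defined above; real EEG pipelines do considerably more, e.g. windowing and artifact handling):

```python
import numpy as np

# Named EEG frequency bands in Hz, per the definitions above.
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.5), "alpha": (8.5, 12.5),
         "beta": (13.0, 30.0), "gamma": (30.0, 80.0)}

def band_powers(signal, fs):
    """Return summed spectral power per named band, via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# A pure 10 Hz oscillation (alpha range) sampled at 256 Hz for 4 s
# should put essentially all of its power in the alpha band.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
powers = band_powers(np.sin(2 * np.pi * 10 * t), fs)
assert max(powers, key=powers.get) == "alpha"
```

The "band power" numbers reported in studies like this one are, at bottom, sums over ranges of FFT bins like these.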

There are many theories about what functions or concepts these frequency bands correspond to. Some are purported to be "carrier frequencies." Some are thought to be resultant from interneurons or glial cells. Theta has been assigned mystical significance, even being enshrined in a System of a Down song. But we can have a high degree of skepticism because those frequencies don't exist the way they seem to.

Again, EEG is a vector sum of discrete microscopic events. Almost all brain activities are generated via action potentials - discrete, sharp, single-neuron firings lasting milliseconds. Moreover, because brain activities are so tiny once they reach the scalp-electrode interface, thousands of neurons must fire concurrently in order to generate detectable signals. If you consider that a delta-range waveform may be 0.5 to 1 second wide, and that the recorded voltage of a delta wave is orders of magnitude higher than the voltage of an action potential, you get a sense of how much neuronal activity must be vector-summed together in order to generate EEG waveforms.

Also notice that neurons are themselves discretized! Brainwaves begin as discretized events, which are spatiotemporally vector-summed such that they reach the scalp-electrode interface as an analog signal, which is then digitally acquired, amplified, and analyzed, before being decomposed into illusory component frequencies.

In that sense, "delta" and "theta" don't actually exist. Neurons do not generate delta waves - neurons generate action potentials. "Delta" and "theta" and "alpha" etc. arise from the blurred and distant appearance of the simultaneous firings of vast populations of neurons.
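That point - slow "waves" emerging from sums of fast, discrete events - can be shown with a toy simulation (not a biophysical model; the neuron count and rates are arbitrary): hundreds of millisecond-long "spikes" whose firing rate is modulated at 2 Hz sum to a signal whose spectrum peaks squarely in the delta band, even though no individual event lasts longer than a millisecond.

```python
import numpy as np

fs = 1000                            # 1 kHz sampling, 5 s "recording"
t = np.arange(0, 5, 1.0 / fs)
rate = 50.0 * (1 + np.sin(2 * np.pi * 2 * t))   # firing rate swings at 2 Hz

# 500 "neurons", each emitting discrete 1 ms events (inhomogeneous
# Poisson spikes). The "electrode" sees only their sum.
rng = np.random.default_rng(1)
spikes = rng.random((500, len(t))) < rate / fs  # one row per neuron
summed = spikes.sum(axis=0).astype(float)

# The spectrum of the summed signal peaks at the 2 Hz modulation,
# although no single underlying event lasts longer than 1 ms.
freqs = np.fft.rfftfreq(len(summed), d=1.0 / fs)
power = np.abs(np.fft.rfft(summed - summed.mean())) ** 2
peak = freqs[np.argmax(power)]
assert 0.5 <= peak <= 4.0            # dominant frequency is in delta range
```

No "delta generator" exists anywhere in this simulation; the slow rhythm is entirely an artifact of summing many fast events whose rate happens to wax and wane slowly.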

To sum up those challenges: EEG presents technical challenges of digital signal acquisition, mathematical challenges of spatial and dimensional sampling, and physiologic/philosophical challenges of intuiting what brain activities even are.

author

I'm not sure what you want me to think here. If someone makes a claim like "relaxed people have more alpha waves", is that statement necessarily garbage? Or is it built on a bunch of philosophical abstractions but potentially true? If the second, how does that differ from what this study was doing?


What I want you to understand is that the sentence “relaxed people have more alpha waves” is not even a claim.

The sentence is structured : given a particular behavioral state, a particular neurobiologic measurement will be at least this number of microvolts. That’s not a valid structure.

It’s like saying “curious people have heart rates that are odd numbers.” Even if you took a unit population and randomly assigned two groups to curiosity-generators and the curious people had heart rates that were odd numbers… you would not have started from a valid question. And therefore your observations would be meaningless. You would have no claim. You would simply have a weird correlation.

Others might come behind you and figure out the mechanisms underlying the odd-heart-rate-of-curiosity phenomenon. But they would have eventually noticed those mechanisms even without your help.

It isn’t merely the many layers of abstraction between the cortical signal generators and the ultimate appearance of the EEG. And it isn’t merely that we have so much left to learn about brain activities. The unknown is eminently workable in time.

The problem is that no valid question has been asked. Therefore no claim can be asserted.

This paper makes the same sort of assertions but far worse. They are trying to assert that there is a causal chain stretching from [impoverished family receives cash] to [infant EEG band power]. This would be a stretch if they could even demonstrate a firm grasp of EEG.

Feb 13, 2022·edited Feb 13, 2022

Thank you for your insightful comments! If you have a bit more time, could you expand on how "relaxed people have more alpha waves" is different from the typical sanctioned use cases for EEG you describe below, e.g. "If part of the EEG has too much low-frequency activity, or too little high-frequency activity, this can identify an area of local dysfunction"?

What is it that makes the latter statement useful / meaningful, and the former meaningless? Structurally, they seem similar at first blush, at least to a layperson such as myself.

(I'm not trying to be contrarian, I'd genuinely like to understand this better.)


Thank you for your kind words! Not contrarian at all.

The important thing to remember is that “alpha” is a sensor state, not a biologic state. Neurons make action potentials, but EEGs spatiotemporally sum action potentials in such a way that it looks like the thing we call alpha. Thus the structural difference between “relaxed people have more alpha” and “the EEG of a relaxed person may exhibit more alpha.”

This may seem a pedantic distinction. All things are sensor states. But this distinction is essential to EEG because of how an EEG must be interpreted. Each EEG channel is a depiction of spatiotemporally summed action potentials, formed via multiple layers of common mode rejection. Then we systematically ask how the various depictions differ from one another. Thus there are no true reference points in EEG, and every observation must be evaluated in proper context.

It’s a bit like asking “what is the center of the surface of a sphere?” That depends on what you are trying to do and what questions you are trying to answer. Are we playing pool, or are we on a polar expedition, or are you Queen Elizabeth?

The task of reading EEG is not simply pattern recognition - it is the careful management of multiple contexts. How old is the patient? What stage of wakefulness or sleep are they in? What is the purpose of obtaining the EEG? Which regions of the brain are being compared? All of these things are essential because they help us validate our observations. Asking the same question - is the patient sick? - from the multiple angles of frequency composition, state, age, and structure is how we construct a well-informed interpretation of the EEG, even when the EEG itself is shifting sand.

For instance : when a part of the EEG has too much low-frequency activity. This is necessarily a comparison to the adjacent EEG channels, and also a comparison to the corresponding channels on the other side of the head. Both of these serve as active controls for the channel of interest. It is also a comparison to the normal patterns of EEG organization. A healthy EEG has an anterior-to-posterior gradient, such that in an adult the frontal regions exhibit mostly low-amplitude beta activity, and the posterior regions exhibit moderate-amplitude alpha activity, with a smooth spatial transition in terms of frequency and amplitude. (In a child, the same type of relative gradient is present but the frequency composition will differ according to age). It is also a comparison to the norms for the patient’s age. It is also a comparison to the normal patterns of state. In drowsiness, the anterior-to-posterior gradient gradually melts away and more low-frequency activities arise, all as part of the gradual transition into sleep.

Alpha is a wonderful example. It is a frequency band defined by an EEG feature called the posterior dominant rhythm, which ranges from 8.5 to 12.5 Hz in most healthy adults. However, we observe alpha in many contexts.

Physiologically, “alpha” itself does not mean good or bad, healthy or sick. Alpha may be a normal feature of the posterior head channels in healthy adults - or it may be completely absent. It may be more diffusely present in the EEG of infants and children, albeit following that anterior-to-posterior gradient. If alpha is present in the frontal head regions of a fully-awake adult, it may mean that their brain is sick, or it may mean that they are just drowsy. We sometimes see discrete bursts of alpha-range frequencies at the beginning of a seizure. We may also see discrete bursts of alpha as part of normal physiologic transients. There is a pattern called alpha coma, wherein the entire EEG is swamped in monotonous alpha. When we see this pattern after cardiac arrest, it indicates almost certain death.

These are all things we may see in an EEG performed with good technique and proper spatial/frequency resolution. However, consider the ways in which the resolution of the EEG is further reduced, such as the fast Fourier transform. This mathematical operation takes a chunk from a signal and breaks it down into its component frequencies - it creates a set of sine waves that, when occurring simultaneously, would sum together and reconstitute the exact signal. This is a useful and almost magical technique, when applied to the correct question. However, breaking down a signal into those component frequencies removes important contexts.

But brain activities are not made out of sine waves - they are made out of discrete, spiky action potentials. Thus, a fast Fourier transform does not tell us the “ingredients” of the brain waves, it only tells us how to reconstitute the sensor state we call EEG. Improperly applied, it is a bit like reading the nutrition facts on the back of a food package and trying to determine from fat, carbohydrates, salt and protein what the food will taste like. Or, photographing a painting with a one-pixel camera, calculating the average values for red, green, and blue in the RGB color scheme, and then trying to decide from these numbers if the painting is any good.
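To make the nutrition-facts analogy concrete, here is a minimal numpy sketch (a synthetic random-walk signal, not real EEG, and arbitrary toy parameters): two signals can have identical band power while being completely different waveforms, because the Fourier magnitudes discard all phase information.

```python
import numpy as np

rng = np.random.default_rng(0)
n, fs = 1000, 250  # samples and sampling rate (Hz); arbitrary toy values

# A spiky, non-sinusoidal synthetic signal (a random walk, not real EEG).
x = np.cumsum(rng.standard_normal(n))
x -= x.mean()

# Scramble every phase while keeping every magnitude identical.
spec = np.fft.rfft(x)
scrambled = np.abs(spec) * np.exp(1j * rng.uniform(0, 2 * np.pi, spec.size))
scrambled[0], scrambled[-1] = spec[0], spec[-1]  # keep DC and Nyquist bins real
y = np.fft.irfft(scrambled, n=n)

freqs = np.fft.rfftfreq(n, 1 / fs)
band = (freqs >= 8.5) & (freqs <= 12.5)  # the 8.5-12.5 Hz "alpha" range

def band_power(sig):
    # Total spectral power inside the band.
    return np.sum(np.abs(np.fft.rfft(sig))[band] ** 2)

print(np.isclose(band_power(x), band_power(y)))  # True: identical "alpha power"
print(np.allclose(x, y))                         # False: very different waveforms
```

Any summary that keeps only band power scores these two signals identically, which is the sense in which the transform throws away context.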

Thus, the question “how much alpha is in this EEG?” eliminates all the important contexts, to the degree that it isn’t even a question.

Expand full comment

Thank you for taking the time to elaborate in such detail! This really helped :) So alpha can mean various things depending on context, it demands careful interpretation, and throwing away the context to focus on just the amount of alpha is misguided.

My field is linguistics (which is partially why I was intrigued by your series of posts, because EEG sometimes comes up as a method, with ERPs and such), so an analogy that comes to mind: "relaxed people have more alpha" is kind of like saying "relaxed people write texts that have more vowels". While it may be technically true, without further context, it's not really a meaningful observation.

Expand full comment

3. Lots of validation problems

I am painting a rather skeptical picture of EEG. Don't worry - it actually is a highly useful tool, as long as you ask it questions that it is capable of answering. The frequency composition of an EEG, when compared to expectations for the patient's age, can tell us if their brain is sick. (Albeit very nonspecifically.) If part of the EEG has too much low-frequency activity, or too little high-frequency activity, this can identify an area of local dysfunction. And, when populations of neurons fire in an abnormally-synchronized fashion, this can help us determine seizure risk. (Granted, the interpretation of epileptiform abnormalities on the EEG is fraught with difficulty and some controversy, and is a topic unto itself.) So, we can certainly utilize EEG to the benefit of the patient and their healthcare providers. We just have to use it carefully. The careful use thereof has a fascinating history, reaching such strange locations as supreme court case law and double assassinations.

But, given all of the above, you may guess that EEG has not been well-validated in the type of question asked by Troller-Renfree et al. You would be correct!

Even from the standpoint that EEG is a useful tool, babies are not uniform creatures. Every infant is a little different in terms of their developmental journey, and a little different in terms of the appearance of EEG - and those two things don't even necessarily connect to each other in a reliable fashion. If you imagine that "developmental status" is a quantifiable variable (please suspend your disbelief) and that "EEG status" is a quantifiable variable (please suspend your disbelief), the means of those variables would have similar distributions and wide confidence intervals. Moreover, both developmental status and EEG status are determined by the infant's age as measured from the point of conception. Meaning, an infant born premature will look different, developmentally and on EEG, from an infant born term. These effects do not seem to have been accounted for.

This is to say nothing of the genetic and epigenetic drivers of developmental delay. Though it is difficult to conceive of a single study powerful enough to control not only for economic status and conceptional age at birth, but also the heritable contributions to an infant's developmental capabilities, it is safe to say that genetic risk factors are a lurking confounder. We may reach down into an even more fundamental level : do members of a family or kindred have similar appearances of their EEG at key points along their developmental timeline? No one knows. One would want EEG to have been validated for that question in light of all the aforementioned challenges, if it is to be relied upon as the authors of this study have done.

I won't comment on potential effects of cash payments. I see many patients in my clinic whose development stalls in the face of parental neglect. Often, I witness this from a happy direction : an infant or toddler with dense developmental delays is moved to stable foster care, and with consistent attention and nutrition they make extraordinary gains and often completely normalize! Unrelenting poverty is a potent driver of parental neglect. So, I am entirely sympathetic to the sentiment that enlivens this study. If we had such validated measures of developmental status, and those measures demonstrated the beneficial effects of material improvements to the infant's care environment, well, that is something I would be prepared to believe. I have seen it firsthand.

But we have no such measures. Moreover the degree of precision necessary to make such a dollar-for-dollar comparison would seem prohibitive (I won't delve into the statistical contortions as they are better-examined by others.)

In summary, this study is in my opinion a misuse and misinterpretation of EEG. Even beyond that, we should maintain an extremely high degree of skepticism that EEG can so precisely measure the effects of cash payments upon the developmental progress of infants.

Expand full comment
author

In what sense were age and prematurity not accounted for? It was an RCT, presumably both groups have the same prematurity, they discussed the groups' mean difference in age and although there was one it was small and the opposite of what would produce the effects in the study.

Expand full comment

And moreover, Greg Duncan points out that the babies were recruited from the same nursery population of term infants. So, that is my miss. Age and prematurity were accounted for via the random assignment to either the cash or no-cash group.

Expand full comment

4. A few purely-technical notes, having read their appendices :

I am skeptical of their filtration strategy, as it seems likely to induce frequency aliasing. Frequency aliasing is when a high-frequency signal is undersampled or overfiltered, and the resultant reconstruction appears lower-frequency than the actual signal. EG, a 5 Hz signal recorded with a sampling frequency of 8 Hz will be reconstructed as though a 3 Hz signal was recorded. The minimum sampling rate (or, the minimum low-pass filter cutoff frequency) is determined by the Nyquist equation : minimum sampling rate = 2 * target frequency. EG, if I want to record frequencies up to 5 Hz, I must use a sampling frequency of at least 10 Hz, or, I must set my low-pass filter cutoff frequency at least as high as 10 Hz. According to the body that sets our standards (American Clinical Neurophysiology Society), the low-pass filter cutoff frequency needs to be three times as high as the highest target frequency in the recording.
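The 5 Hz / 8 Hz example above can be checked numerically. In this toy sketch, the samples of a 5 Hz cosine taken at 8 Hz are indistinguishable from those of a 3 Hz cosine:

```python
import numpy as np

fs = 8                       # deliberately undersampled: 8 Hz sampling rate
t = np.arange(0, 2, 1 / fs)  # two seconds of sample instants

five_hz = np.cos(2 * np.pi * 5 * t)   # the true 5 Hz signal
three_hz = np.cos(2 * np.pi * 3 * t)  # its alias at 8 - 5 = 3 Hz

# At these sample instants the two signals are identical.
print(np.allclose(five_hz, three_hz))  # True
```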

If they purport to record gamma as high as 45 Hz, their low-pass filter cutoff frequency should have been 135 Hz. Instead, they attempted to record frequencies as high as 45 Hz with a low-pass filter cutoff frequency of 50 Hz. The only frequencies they recorded with fidelity were maximally 25 Hz, and likely only up to 16 Hz. Moreover, some of their measured alpha and beta may be contaminated by frequency-aliased gamma.

I would like to know more about their acquisition equipment. Reading the previous paragraph, you might have the question : if frequency aliasing is such a pitfall, how can we possibly account for it without setting our sampling rates and cutoff frequencies extremely high? The answer is provided by a physical antialiasing filter, inserted between the electrode itself and the analog-to-digital converter. Its job is to remove enough of the unusable extreme high frequencies to permit digitization and analysis. The authors don't even describe whether they used one. Given the technical errors elsewhere in their acquisition, I would be unsurprised if they neglected to do so.

They acquired and analyzed their EEG in MATLAB, rather than industry-standard equipment. I have taken part in research studies that used things like MATLAB to record EEG, and the resultant studies were of uniformly awful quality.

They did not bin their frequency bands correctly. Theta should be 4 to 8 Hz, alpha should be 8.5 to 12.5 Hz. Instead, they have binned theta from 3 to 5 Hz, and alpha from 6 to 9 Hz. Their beta bin is reasonably correct at 12 to 19 Hz. But! This leaves the 10 to 11 Hz band completely unanalyzed. 10 to 11 Hz is alpha. They did not analyze alpha whatsoever. Their disclaimer "Consistent with other infant studies" is stupid - this only serves to point out that other infant studies are similarly unreliable.

Moreover, we EEGers bin our frequency bands in increments of 0.5 Hz. They binned theirs in increments of 1 Hz. That is not the correct resolution, and given the analytic capabilities of MATLAB there are no technological excuses.
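Putting the two binning complaints together, a quick sketch (bin edges taken from the paragraphs above; the 0.5 Hz resolution is the clinical convention described there, not anything from the paper) shows which alpha-range frequencies fall through the cracks:

```python
def half_hz_range(lo, hi):
    """Every 0.5 Hz step from lo to hi, inclusive."""
    return {lo + 0.5 * k for k in range(int((hi - lo) * 2) + 1)}

# Bins as reported in the paper: theta 3-5 Hz, alpha 6-9 Hz, beta 12-19 Hz.
paper = half_hz_range(3, 5) | half_hz_range(6, 9) | half_hz_range(12, 19)

# The clinically defined alpha band, 8.5 to 12.5 Hz.
clinical_alpha = half_hz_range(8.5, 12.5)

missed = sorted(clinical_alpha - paper)
print(missed)  # [9.5, 10.0, 10.5, 11.0, 11.5] - alpha that no bin analyzes
```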

Expand full comment

Thanx a lot for your work here.

Expand full comment

Is it that hard to record higher frequencies? A computer's soundcard can handle tens of kHz but I suspect the voltages are higher.

Expand full comment

It’s not that hard. You just need to acquire the data at a high enough sampling rate to reproduce the signal with fidelity.

Our EEGs are routinely acquired at 1 kHz. In a typical study we are only interested in frequencies up to 20 Hz, so this is more than enough for fidelity. For intracranial EEGs we acquire at 2 kHz IIRC. I have seen research rigs that record at 10 kHz.

Expand full comment

Great write-up, thank you!

Expand full comment

You seem to have conflated Nyquist sampling frequency vs. cutoff frequency here. A 50Hz cutoff is fine to acquire a 45Hz signal which then must be sampled at >2x. This could be very close to 100Hz but would require a problematically sharp filter. 135Hz allows a reasonable order filter.

Not an EEG expert but I am an IC designer and although I specialize in power, I am involved in A2D converter design as well.

Expand full comment

I might be conflating those things. It has been a long time since my undergrad DSP, I am no longer qualified as an engineer! Appreciate you weighing in.

I will say that when reading an EEG, we see the same kinds of visual distortions from over-filtration as from undersampling. Our lowpass cutoff is set up with >3x limit for that reason.

So as an EEGer, I am concerned that their analyses would be adversely affected by their filtration paradigm, but given that I am relying on my experience with the visual analysis of EEG and this is a band power analysis, I might not have that technical point correct.

Expand full comment

Amazing post, thanks.

Expand full comment

Just wanted to say I really appreciated the effort you put into this and also wanted to thank our host for highlighting it in the latest OT

Expand full comment

Excuse the perhaps naive question, but as I understand EEG measurements involve a number of channels representing readings by a large number of electrodes. Could you clarify whether this spectral analysis is supposed to be a 1-channel thing or does it involve multiple channels?

Expand full comment

True questions are only good in proportion to their naïveté! Otherwise why ask?

The important thing to know about EEG is that in most situations, there is no “supposed to.” An EEG is acquired via a set of (usually) 20 electrodes, placed in a grid that is normalized to the patient’s head measurements with a few left over for referencing and for ground. (I am not sure how to post links here - you can google 10 20 system.) You would think that this is so we can record from particular areas of the brain, and, we do name the electrodes according to which cortical regions they are probably recording from. However, that would not be precisely true.

An EEG is interpreted according to common mode rejection - meaning, we compare a target electrode to one or more comparison electrodes, in order to get closer to the activity underneath the target electrode. It’s simple subtraction : target electrode minus reference electrode. If you want a math/circuits analog, google differential amplifier. What we look at when we read an EEG is the visual depiction of those comparisons between electrodes - each channel is created by subtracting an electrode from a reference.
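A toy sketch of that subtraction, with made-up numbers: each electrode records its local activity plus a large artifact common to the whole scalp (electrode names from the 10-20 system; the signals are entirely synthetic), and the channel subtraction cancels the shared part:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000  # samples

# A large common-mode artifact (e.g. mains noise) seen by every electrode.
common = 50 * np.sin(2 * np.pi * 60 * np.arange(n) / 1000)

local_F3 = rng.standard_normal(n)  # activity under the "target" electrode
local_Cz = rng.standard_normal(n)  # activity under the "reference" electrode

electrode_F3 = local_F3 + common
electrode_Cz = local_Cz + common

# One EEG channel: target electrode minus reference electrode.
channel = electrode_F3 - electrode_Cz

# The huge shared artifact cancels; only the local difference remains.
print(np.allclose(channel, local_F3 - local_Cz))  # True
```

Note the caveat in the comment: whatever is local to the reference electrode survives the subtraction too, which is why reference choice matters.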

The difficulty is that there is no ideal reference. Frequency bands and normal physiologic patterns may be localized to specific regions, and if you tie all your comparisons to an electrode with those physiologic patterns, they will contaminate your channels. For instance, a Cz reference may be effective in wakefulness, but Cz is where we see sleep patterns, and thus a Cz reference is useless in sleep.

There are standard ways to systematically set up your channels - in reality we don’t build a bespoke set of channels for each EEG, we utilize standard approaches called montages. Montage selection and interpretation would be fun to write about but I would go on too long for this discussion. Suffice to say that there is no perfect montage. Unusual patterns and thorny questions often require the use of multiple montages to sort out.

So, what we tell our trainees is that EEG is shifting sand - there is no firm ground to stand upon. There are only good questions and a set of useful techniques. EEG is a conformed and relative two dimensional dataset acquired from a mysterious three dimensional object.

To get to your original question, spectral analysis presents essentially the same set of options and difficulties. There is no such thing as recording power bands from one region of the brain. Instead, you set up a channel (or channels), acquire that relative signal, and then decompose the signal into its component power bands via fast Fourier transform. This type of analysis is subject to all the same constraints as visual interpretation.

And ultimately, there is no “supposed to” yet. There aren’t any standard spectral analysis montages. Frequency-domain analysis has just not been around very long. Moreover, because it requires more computing power, people have tended to utilize very basic sets of comparisons. This makes it a fairly blunt instrument at present.

Expand full comment

Awesome answer; I love this community! Thanks for the explanation!

Expand full comment

Rahien, my coauthors try to discourage me from engaging in these discussions, but you put a lot of effort into your post, so I will address some of the issues you raise and ask some questions, repeating the caveat that I am not trained as a clinician or research neuroscientist.

You cover two big themes in your long response – measurement (mostly) and population heterogeneity. I appreciate that there are measurement and data processing challenges. In our case, measurement challenges were compounded by our use of portable EEG devices in the context of the homes of our participants, rather than in laboratory environments. Despite that we were able to cap and gather data from 72% of the infants in our study. Our processing procedures were briefly described in the appendix to our paper but in more detail in the Developmental Psychobiology article that I mentioned in my post. Were you able to read that as well? Or have a look at the code that we have posted? We would love comments by technical experts such as yourself on how our methods might be improved. All of our data processing was by machine code rather than in some sort of case-by-case editing process. We developed and applied rules for when a particular one-second “epoch” of EEG data passed our quality threshold and have posted our code for those rules.

I find it ironic that after you list all of the reasons why electrical activity is so difficult to measure and understand, you go on to characterize EEG approaches as a “highly useful tool”! And then you go on to mention some of their applications for young children. If it is useful for those approaches, is it not possible that it is useful for others, including ours? If it is hopeless for our kind of research purposes, why is there such a robust research literature on infant and child EEG developed by neuroscientists? And, in particular, how can multiple EEG-based studies of infants and toddlers, cited in our paper, converge on findings such as correlations between power in certain frequency bands and family income? That correlational literature is what launched our experimental investigation.

I do not mean to imply that we have solved all of the measurement challenges you describe. But it is also important to understand that whatever measurement errors there are are balanced between our two randomly-assigned low- and high-cash groups. It is a well-established statistical result that random measurement error will not bias the treatment effect estimates we present in the paper (although it can decrease the precision of the estimates).
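That statistical point can be illustrated with a toy simulation (invented numbers, nothing to do with the study's actual data): adding pure measurement noise to the outcome leaves the mean treatment-effect estimate on target but spreads the individual estimates out.

```python
import numpy as np

rng = np.random.default_rng(2)
true_effect, n, reps = 1.0, 500, 2000

clean_est, noisy_est = [], []
for _ in range(reps):
    treat = rng.integers(0, 2, n)                        # random assignment
    outcome = true_effect * treat + rng.standard_normal(n)
    measured = outcome + 3 * rng.standard_normal(n)      # heavy measurement error
    clean_est.append(outcome[treat == 1].mean() - outcome[treat == 0].mean())
    noisy_est.append(measured[treat == 1].mean() - measured[treat == 0].mean())

# Both estimators average out to the true effect...
print(round(float(np.mean(clean_est)), 1), round(float(np.mean(noisy_est)), 1))  # 1.0 1.0
# ...but the noisy one is much less precise.
print(np.std(noisy_est) > np.std(clean_est))  # True
```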

As to population heterogeneity, first a note: all of the children enrolled into our study were admitted into their birth hospitals’ well-baby nurseries. So almost all are full term and are not presenting with many of the developmental problems that you see in your work.

That said, we couldn’t agree more that “every infant is different,” as are the family conditions in which they live. But the beauty of random assignment is that these infant and family differences are balanced across infants in the high- and low-cash gift groups and do not bias our impact estimates.

Let me close by repeating that our claims in the paper are about EEG differences and not differences in the kinds of thinking and learning capacities that we care most about. Our follow-up at age 4 will provide a much more complete picture of group differences in those kinds of capacities.

Expand full comment

Your coauthors are entirely correct about healthy boundaries, and I would similarly discourage you from engaging in this type of discussion. There are three or four ways this could go down. Only one of them is good. I’m not up for threading that needle for you.

Utilize the normal channels of peer review, do your own damn homework, and forget the internet.

Expand full comment

So much for my hopes that a civil set of comments and questions about your post would lead to a discussion.

Expand full comment

I’m not sure how to actually take on board your point about not over relying on heuristics and dismissing any study you don’t like. I mean, I agree in principle. But when I saw this headline I immediately said to myself “this is not a real finding and I would bet a very large sum of money on the spot that the paper has obvious flaws and won’t replicate”. And I was right, as I knew I would be.

So how do I remain epistemically virtuous here? The heuristic just works too well to truly ignore it.

Expand full comment

I really want to know what goes on in these academics' minds.

The two poles

A) We think we can see a real effect here, if only we had pre-registered it; maybe we should publish anyway, it will add to the sum total of human knowledge.

B) We are in the money with this, let's p-hack the hell out of this study so we can get headlines and future grants.

Obviously both are extremes, but I have gone from thinking academics were mainly A, to realising quite a few are a bit like B.

Expand full comment
Jan 26, 2022·edited Jan 26, 2022

You don't need sinister intent to get B. If you're looking at a maybe-effect and trying to figure out if it's real/publishable, or weighing whether a bit of statistics fudging is legitimate (like excluding outliers which are obvious technical bugs) or blatant p-hacking, the background thoughts of "if I don't do this funding will dry up/my student won't get a good job and if I do this people will really like it" influence your thinking more than we'd like to admit.

My way of dealing with it is describing my process explicitly to someone with no vested interest and who cares about scientific rigor, so they can be my Reviewer #2 and say stuff like "but is there actually a good independent reason to exclude these datapoints or do you just not like them?". But you don't always have someone like this around (when I don't I just use Putanumonit's angry ghost instead)

Expand full comment
Jan 26, 2022·edited Jan 26, 2022

In the original post, Scott wrote:

"Most of these families were making about $20,000, so this was an increase of about 10-20%."

This is only partly true. From Table 1, "Characteristics of EEG Sample", "Household combined income at baseline (dollars)":

$22,739 ± $20,875, n=238 for Low-cash gift EEG sample

$20,213 ± $14,402, n=168 for High-cash gift EEG sample

That is a great deal of intra-group variation. At face value, a $4,000 per year supplement will mean far more to a family one SD below the mean (69% boost to income) than to one at +1 SD (12% boost).
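In other words (a back-of-envelope sketch using the high-cash group's Table 1 numbers and the $4,000/year supplement mentioned above):

```python
mean, sd = 20_213, 14_402  # high-cash gift group: mean and SD of baseline income
gift = 4_000               # roughly the annual high-cash supplement

for label, income in [("-1 SD", mean - sd), ("mean", mean), ("+1 SD", mean + sd)]:
    print(f"{label}: {gift / income:.0%} boost")
# -1 SD: 69% boost
# mean: 20% boost
# +1 SD: 12% boost
```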

It seems to be commonly accepted that low-income mothers' behaviors are influenced by the knowledge that benefits will be reduced by higher declared income -- this provides incentives to participate in the informal economy.

Perhaps the authors figured that for that reason, declared income doesn't mean much; "poor is poor." I think the journal should have required the authors to address this point explicitly (I don't believe that they did).

Expand full comment

Interesting tie-in to Scott's comment on hating imaging - computer vision has a terrible time with undetectable-to-humans changes to small numbers of pixels in images. In effect, all image analysis, whether by humans or by complicated statistical tools, is very hard due to exactly the high degrees of freedom that Scott points out. Unfortunately, this is also why a bunch of very good compression and storage techniques for signal processing and for lowering the cost of medical imaging technologies are looked at with so much skepticism, per a friend of mine studying some of that in his EE PhD at UC Berkeley.

Expand full comment

From the study: "The BFY study will continue to follow these children through at least the first 4 y of life, to determine whether treatment impacts on brain activity persist and extend to direct measures of children’s cognitive and behavioral outcomes."

My guess is it won't for the reasons you laid out. Maybe you could make a prediction on it if someone disagrees.

Expand full comment

Did not Dick Armey teach the masses thusly: "You tell me who did the study, and I'll tell you what results they got!"?

That applies to all sides, FWIW.

Expand full comment

> And finally, people want to discover a link between poverty and cognitive function so bad.

This is baffling to me. The link is OBVIOUS. People with lower cognitive function have less ability to generate wealth in society. This is like 95% of the way to the definition of generating wealth. This is almost as tautological as saying "people want to discover a link between poverty and being poor". Yes, being poor _causes_ poverty, almost definitionally. In the same manner, in an information-age economy, being information-literate is almost definitionally what it means to be capable of generating wealth.

Expand full comment

You're not even willing to entertain that a reverse causal relationship exists as well? That early deprivation, lack of access to opportunity, poor education, all affect future cognitive function?

Expand full comment

The power of the just-so story of poverty, where it's a moral punishment inflicted on the scum of society, is hypnotically-strong. Just look at all the fine rationalists in this comments section who are more willing to suggest a soft democide against the poor than actually trying to make systemic improvements to their lives- because that might mean that some of the taxpayer's (e.g. their) money might go to the "undeserving".

Expand full comment

You keep insisting that it is about 'deserve' or 'doesn't deserve' even after like, a dozen people trying to patiently explain that no, they are far more concerned with incentives than anything else. When you continuously refer to an argument that people have disavowed, an argument that makes them seem evil, even after it has been pointed out that this is not an accurate representation of what your opponents believe, it makes it seem like you are arguing in bad faith. Please stop.

Expand full comment

Except the genetic argument (which was being made in the top-level comment by implication) is not, in point of fact, about incentives at its root, it's about the idea that:

1. Poor people are poor because there is some genetic factor that leads to their poverty, and

2. The effect of this genetic factor is so strong that any large-scale societal programs to help with poverty are at best a waste of time and at worst encouraging people with bad genes to have more children, creating a dysgenic effect on society at large.

Which leads to the conclusion that either you can do nothing to fix poverty, or poverty can only be fixed by reducing the population of the breeding poor.

The argument that this is purely about incentivizing better behavior is facetious.

If you don't believe the above, then I'm not arguing with you.

Expand full comment

Absolute EEG power is a very odd metric and doesn't really mean much, from my understanding.

Expand full comment

An interesting note is the distribution of parental incomes. The average was around $20,000, but the range was anywhere from about $4000 to $40,000. Getting an extra $4000 a year on a $4000 annual income should have a much larger effect than an extra $4000 on a $40k annual income (that at least sounds good on paper). I don't believe the authors of the study ever broke the children down into groups based on parental income levels because that wasn't part of the pre-trial design. But, to give them a generous interpretation (which I don't necessarily think they deserve), maybe the effects are more pronounced at lower income levels / for higher relative income changes? Maybe $60/mo (a 14.4% increase on an annual income of $5000) has a bigger impact than $330/mo on a $40,000 salary (a 10% increase)?

Expand full comment

Is it an important critique that PNAS has weird peer review? Does Richie systematically talk about this, or only when he objects to a paper?

We should keep track records. PNAS was one of the most prestigious journals in the world when it did not do peer review. Why was this? Was it because the members of PNAS put skin in the game by personally endorsing papers? But the response to its success was to end the experiment and prevent it from generating more data. It was bullied into adopting peer review. And no one believes the journal wanted it, so no one is sure what they're really doing. Richie continues the bullying today:

'"Contributed" submissions do get peer-reviewed - but there must be SOMETHING easier about this way of submitting articles, otherwise why would it exist? My guess is that the Contributor's handpicked choices for reviewers are almost always granted:'

Or, maybe people submit to PNAS for the obvious reason that it is one of the most prestigious journals in the world. We shouldn't care about whether it's "easy" or "hard," but whether it differentiates truth from falsehood. This paper should be a black mark for PNAS, but we should study its whole track record. We should judge the method by the results of the journal, not the journal by the method. I don't know if PNAS deserves its prestige, but it definitely shouldn't have its prestige be a function of its method.

I think anonymous peer review has been a catastrophe, but I am much more certain that experiments are good. We should test the hypothesis of anonymous peer review. Experiments are *obviously* good and the homogeneity of peer review is *obviously* bad.

Expand full comment

"I think anonymous peer review has been a catastrophe."

This is a very unusual take. While I sometimes get upset with reviews I receive, I'd say on average, they make my papers better, sometimes saving me from real embarrassment even.

Expand full comment

I second this sentiment. I’m usually initially really upset and angry with a review the day it comes in, but after taking time to cool off, the criticism is almost always reasonable/something I would see too if I wasn’t so close to my own work.

Expand full comment

Just saying that anonymous peer review is not eternal is such a radical take, it hardly matters whether I think it made things better or worse. Your response makes me skeptical you noticed my claim.

Feedback from editors and rewriting is not new. Your personal response to peer review might be relevant if you could compare it to your personal response to the system without peer review. People who lived through both systems and chose to record their comments were very negative about the new system. That is a real cost that must be weighed against the putative benefits, but the point of the system is not to be fun (although some would say that this is where we go wrong). I was not at all talking about this cost.

The point of the system is to filter papers. It is on these grounds that anonymous peer review has been a catastrophe.

Expand full comment

Systemic trauma still a little too touchy to shine light upon.

Expand full comment

I'm slightly confused by the argument with the random graphs. Is the point here that those are random in a way that makes their average proportion of disadvantaged children basically the same? Because surely otherwise you would expect to see differences depending on which random group has more of those?

Expand full comment
Jan 26, 2022·edited Jan 26, 2022

The study authors made all their data publicly available[1], so Gelman replaced the "control-vs-experimental" grouping with a post hoc "heads-vs-tails" grouping, meaning the experimental effect should be completely wiped out: both groups have a mix of control and experimental.

Instead, he saw big effects on his graph. Weird!

[1] This is a very good thing that not enough authors do (because it lets outsiders disprove their results) and should be strongly encouraged.

Expand full comment

But why should the experimental effect be completely wiped out? Or is the sample size so large that all groups have basically the same number of disadvantaged children?

Expand full comment

> But why should the experimental effect be completely wiped out?

I give a drug to 500 people and a placebo to 500 people.

Then someone steals my notebook of who was in each group, so I flip a coin to put each participant into one of the two groups.

I shouldn't detect anything. If results continue to leap out, something's weird.

Expand full comment

Shouldn't you see a weaker version of the signal? Suppose everyone in the control group died, and everyone else lived. Say group A has 240 placebos and group B has 260. Group A will show 20 more survivals than B. Less than 500, but still a signal. Is there an obvious reason I'm missing why this signal would be deemed non-significant?

Expand full comment

Yes, the signal would be deemed non-significant, with high probability. That's because the null hypothesis (that A and B have the same distribution of outcomes) is *true* when you randomize like that. Whatever statistical test you do to try to distinguish the groups, there's a 95% chance you'll get a p-value greater than .05.

To restate Edward's point, if you randomize and your statistical test is showing p < .05 more than 5% of the time, something is wrong with your test.
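A quick stdlib-only simulation of that point (a sketch, not the study's actual analysis): build data with a genuine treatment effect, then relabel everyone by coin flip and count how often a test fires anyway.

```python
import math
import random

random.seed(0)

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def two_sample_p(a, b):
    # Large-sample z-test for a difference in means (stdlib only).
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2 * (1 - norm_cdf(abs(z)))

# Data with a genuine treatment effect: the "drug" half really does better.
n = 400
outcome = ([random.gauss(0.5, 1) for _ in range(n // 2)]
           + [random.gauss(0.0, 1) for _ in range(n // 2)])

# The notebook is stolen, so we relabel everyone by coin flip. Under this
# relabeling the null hypothesis is true, so p < .05 should come up only
# about 5% of the time, no matter how strong the original effect was.
trials, hits = 500, 0
for _ in range(trials):
    flips = [random.random() < 0.5 for _ in range(n)]
    a = [x for x, h in zip(outcome, flips) if h]
    b = [x for x, h in zip(outcome, flips) if not h]
    hits += two_sample_p(a, b) < 0.05

print(hits / trials)  # hovers near 0.05, nowhere near 1.0
```

If results still leap out after this scrambling, the test itself is broken.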

Expand full comment
Jan 27, 2022·edited Jan 27, 2022

What's that a signal of, if you don't know beforehand? Looks like there's no significant difference between the groups, and you don't know if your drug did it or about half of the people get better on their own and the drug does nothing or what -- we don't even know if group A had fewer or more placebos; maybe the drug is harmful.

In other words, randomized groups should have results that look like chance, barring some failure in randomization.

It's easier to see and more true-to-life if you think more like "cures a headache" than "saves life" for the effect we're looking at, and, say, a 75% success rate rather than 100%, and some random number between 10 and 60 get better anyway.

Group A has 221 headaches cured, and B has 219. Is this chance? Did the drug help or hinder? Which group had more of the drug?

(In "reality", it helped, as we know -- Group A got 260 doses and Group B got 240; 195 were cured by the drug in A and 180 in B, but I rolled a 39 for "reversion to mean" for B and 26 for A. But eyeballing the data doesn't tell you if this is the case, or if the drug prevented people from getting better 10% of the time but the reversion to mean was much higher, or what.)

Expand full comment

This is fake p-fishing science plus fake cherry-picking journalism. Later, you will see NYT news stories and editorials lobbying for some social spending program on the ground that "studies show . . ." They have been doing this forever.

Every time I see the NYT or some other lefty fake news outlet say "studies show . . ." or "experts agree . . ." I just roll my eyes and assume it is fake.

Expand full comment

Just curious, why are the two groups in each EEG plot always vertical mirror images of each other?

Expand full comment

I agree that the methodology isn't great and it is actually a null result but kudos to the researchers for making the data public! I don't think we should be harsh to the researchers there. It would be easy to tell a tale that the data is too sensitive to release and I bet PNAS would have still published it.

This is also a good demonstration of why data + public release >>> Peer Review for advancing Science.

Expand full comment

What Henderson points out is absolutely damning to me, and shows clear evidence of "rooting around" for an effect. It really does make me sad: those running studies must understand that this kind of activity is antithetical to getting a useful result, yet they do it anyway, and most people don't care. It seems to be almost a norm.

Expand full comment

One piece of advice I give to young scientists looking for projects is that they should pick a project that gives interesting results no matter what.

The worst kind of project is the one where you're looking for a link between X and Y, and it's interesting if you find one, but very uninteresting if you don't. Once you've spent months of your life and hundreds of thousands of dollars of grant money finding children and sticking them in EEGs, if you _don't_ find anything interesting it's going to be awfully tempting to keep massaging the data until you do. And that's even without the political biases.

Expand full comment

Has no one even considered the possibility that the cash grants to poor families might have been used to purchase alcohol, tobacco, drugs and lottery tickets for the adults? That would be unlikely to have any positive outcome for the children. I might be accused of bias in this assumption, but an assumption that the money would only be used to improve the physical health and learning opportunities for the children also reflects bias. In high GDP countries many people are poor, not because of external factors, but because of poor life choices. Giving money to such individuals does not magically result in better life outcomes, whether it's welfare or lottery winnings.

Expand full comment

The inherent idea behind welfare is that parents do, in fact, care for their children, and free riders of the kinds you describe will be a minority and not the majority. Your suggestion isn't a revolutionary idea, it's just the same tired welfare-queen crusading that has been used to argue against giving even the scraps from our tables to the poor since that became a THING.

Expand full comment

I'm not talking about free riders; I'm talking about poor life choices -- which can be found throughout different economic classes. You apparently believe that the poor are all powerless victims of external forces and that they have no agency; but if you had more personal knowledge of low-income people you would realize the diversity of both their circumstances and their nature. As someone who was literally penniless at 41, and able to retire at 63, without handouts or luck or special treatment, I probably understand better than you the importance of self-reliance and making the right choices. Also you need to appreciate the plentiful opportunities afforded to almost everyone in western countries as opposed to their lack for most people in non-western countries. This is why so many immigrants are able to arrive with nothing and achieve so much, because the energy needed for mere survival in their home countries is enough for significant accomplishment here.

Expand full comment

So your theory is that parents who are profoundly poor somehow manage to keep their children fed and clothed despite their poverty (we aren't finding dead kids everywhere), but if we give them slightly more money, they will spend 100% of it _not_ on their kids?

Expand full comment
Jan 28, 2022·edited Jan 28, 2022

I don't think they are saying that, and I certainly hope they are not saying that. But there are people who know exactly how to game the system - when the social worker arrives for the monthly check-in, they have the kids drilled "Say everything is fine, or else". For that one visit, the house will be clean, a cooked meal on the table, and the kids washed and dressed. Once the social worker writes up the report "Mary is doing much better looking after the kids" and is gone, then it goes back to "feed yourselves, I'm going down the pub".

*This happens*. I know it from previous jobs, I know it from being lower class all my life. NOT every poor or working-class person, but a persistent minority.

I can't give details, as I've said, but there was one egregious case where I swear, if I had a gun, I would have shot every bastard of an adult involved because the kids would have been way better in foster care, even if foster care is poor to bad. They really, really were getting an extra allowance for [reason] and then not spending it on the kids, who were suffering - and I have an awful belief at the time and since, they were glad to have them suffering, because obviously miserable and deprived kids were a great visual aid when holding out their hands for "we need more money to look after them".

NOTE: Before anyone goes off on the racial angle, I'm talking about white Irish people who were probably even lower middle-class, if you want a measure of socio-economic strata. Not the lowest class, but they were wringing every penny and every advantage out of the system that they could, at the expense of the kids involved.

Expand full comment

Poor life choices often result from a culture of poor life choices. If your parents and neighbors only modeled poor life choices for you, there's a good chance that's all you know how to do (and that there is a rationality to short-term gain choices when you are in poverty):

https://www.theatlantic.com/business/archive/2013/11/your-brain-on-poverty-why-poor-people-seem-to-make-bad-decisions/281780/

Changing culture is incredibly difficult, but if we're really going to solve poverty instead of band-aid it, we need to address the culture change. (But I am still in favor of the band-aids until we can figure out how to solve the root problems).

Expand full comment

I understand why you're getting mad, Essex, and yeah that would be a flippant and callous way to ignore the study.

Except that isn't just "tired welfare-queen crusading"; I saw it happen in real life. Most parents, single parents or not, do try their best to do good by their kids. Women who are junkies or where the father of their kid is a junky will try and clean up when they have a baby to look after, and if Dad won't at least try, they realise he's bad news and dump him.

But some people are greedy, selfish and horrible, and will use their kids as meal tickets. I've seen cases of people using kids to get extra money and then they spend the money on themselves rather than the purpose for which it was meant. I can't talk about specific details because I'm still bound by confidentiality from the previous job, but there is a reason I am very damn cynical about human nature.

The study never measured this one - you would imagine! - vital point about poverty reduction: where is the money going? If it's going on better food, then nutrition has a huge role in "increased brain activity". If it's going on paying rent, then "reduction in environmental stress" may be the reason, and the reason (if we don't want to think about it because we don't like the conclusion) is that parents who are less stressed are not going to be short-tempered and irritable with their kids.

If the parents are buying booze, weed, and lottery tickets with the extra cash, we need to know that too. Because right now, all we have is a graph saying "more money means increased brain activity" but we have no idea did *some* of the increased brain activity happen in kids where Mom used the extra three hundred a month to buy treats for herself? We have no idea did *some* of the lower brain activity happen in kids where Mom used the extra money to pay rent and buy them new clothes.

So the study is about as much use as a hole in a bucket, going for research into "effects of poverty reduction".

Expand full comment
Jan 28, 2022·edited Jan 28, 2022

I've seen people dead with a needle in their arm while their kids are sitting in their own waste one room over, Deiseach. I don't need you to tell me that horrible things happen in this world.

I still don't believe dismissing the suffering of the poor and refusing to help them because shitty people exist among them is justifiable on either practical or pure-moral grounds. I'm not debating whether this study is good or not, I'm debating whether CHARITY and COMPASSION are good or not. Given your previous statements, I'd assume the former, but people never fail to surprise me on this site.

Other people in this very comment section seem to believe these things are bad, at least when directed at the unwashed low-income masses, and believe that instead of pure welfare we should make welfare contingent on sterilization. I reject the genetic theory of poverty, and the idea that a minority of bad apples should spoil the whole barrel, especially because the latter can and does form a fully-general argument against even minimal kinds of charity ("Why should I give a dollar to that homeless guy on the street? He'll just spend it on drugs. Why should I donate to the food bank? They're just scams and tax-dodges, nobody who actually NEEDS the food will even get it. Why should I call 911 for the addict overdosing in the alleyway, he probably mugged someone for the money for heroin, what goes around comes around.")

If you don't believe in either of those things, I don't have a real quarrel with you.

Expand full comment

(Literally, not metaphorically, pulling up my sleeves to type this).

First, if we're going to have a dick-measuring contest over "who had the harder life/better experience of real life poverty", I can pull PERSONAL LIVED EXPERIENCE out of my life, but I don't have a dick so I'm not interested in this kind of contest.

Second, if you took away from what I said that my meaning is as per the quote below, that is a mistaken impression:

"Are there no prisons?" asked Scrooge.

"Plenty of prisons," said the gentleman, laying down the pen again.

"And the Union workhouses?" demanded Scrooge. "Are they still in operation?"

"They are. Still," returned the gentleman, "I wish I could say they were not."

"The Treadmill and the Poor Law are in full vigour, then?" said Scrooge.

"Both very busy, sir."

"Oh! I was afraid, from what you said at first, that something had occurred to stop them in their useful course," said Scrooge. "I'm very glad to hear it."

I am not saying "don't give welfare, don't give money", in fact with all this talk of extra child tax credit I don't understand why the US doesn't just have children's allowance:

https://www.citizensinformation.ie/en/social_welfare/social_welfare_payments/social_welfare_payments_to_families_and_children/child_benefit.html

Thirdly, this study is no damn use for what it is trying to achieve. If you want to work on poverty reduction, and you use a study that goes "giving parents money causes higher brain activity in infants", then the first question anyone is going to ask is "I am very glad to hear it! What was the most useful thing the money was spent on, so we can give more money for that?"

And then you have to come back and say "I have no idea, it could be the God's honest truth that some parents spent it on booze but the kids still had higher measured brain activity".

And then either the other person goes "Cool, was that beer or spirits?" or they think you're a ninnyhammer who wants to give people money to buy drink instead of baby formula.

If you want to solve a problem, then you need to know the parameters. For all the good this study does, they might as well have gone "We went out to the woods at midnight to see the pixies dance in the fairy grove and then when we measured the effect on the babies, their tiny brains all lit up with joy!"

Expand full comment

Right, I have no quarrel with you then.

Expand full comment

"Has no one even considered the possibility that the cash grants to poor families might have been used to purchase alcohol, tobacco, drugs and lottery tickets for the adults?"

First thing that leaped to mind, and they don't seem to have explored that path.

Contrary to certain popular wisdom, low-income families in general do try and care for their kids. Contrary to certain other popular wisdom, there are some horribly neglectful parents out there who would do exactly that with extra money and leave their kids hungry, cold, and dirty.

If the study is not looking at "where does the extra money go?" then it's no real use in "we think giving more money helps babies, but since we couldn't bother our arse to ask parents about their outstanding bills, we are only guessing despite going through the ritual performance of sticking wires to babies' heads and drawing pretty graphs".

Expand full comment

The debunking has focused on half of this study, the link between gibs money and EEG.

But what about the other half -- the link between EEG and cognitive performance? Is there any indication that cognitive performance is meaningfully measurable with an EEG (outside extreme cases where the brain is severely malfunctioning)? Can we save a whole lot of money on IQ tests by just hooking people up to EEGs to see who the smart ones are?

Expand full comment

Trying to invent reasons to do one thing as opposed to another, or more precisely to make others do one or the other, invariably creates false science and counterproductive policies. Such is the case with the war against poverty, the war against drugs, the war against God, and the war against fossil fuels, to name only the recent stupid political abuses against human rights.

Expand full comment

As soon as i say this blogger is cool he turns into a mega pussy again. WTF?

Expand full comment

The obvious solutions are: stop commenting on the author, or read only every other article.

Expand full comment
Jan 26, 2022·edited Jan 26, 2022

"the lead author is named Dr. Troller, and I am a nominative determinist"

OMG, me too! It's crazy how often names match occupations or reveal hidden truths. Of course Bernie 'made off' with all your money!

But it can also be a limiting bias. I was overly skeptical of Operation Warp Speed just because the lead doctor's name sounds like "slow-ee."

Or maybe not? From Dec 2020:

"Moncef Slaoui, chief science adviser for Operation Warp Speed, said during a media briefing Wednesday. 'The process of immunizations — shots in arms — is happening slower than we thought it would be.'"

https://www.nbcnews.com/health/health-news/slower-expected-covid-vaccines-are-not-being-given-quickly-projected-n1252225

Of course he would say that!

Expand full comment

The "low cash" and "high cash" lines in the first graph look too much like mirror images of each other - eyeballing it, it looks like score(low cash frequency) ~= -0.8 * score(high cash frequency) for every frequency. But Andrew Gelman's results look the same, so what's going on? And if the graphs really are supposed to look mirrored, why are they both present? Isn't that going to make any differences look twice as large as they really are?

Expand full comment

> In order to trust their positive results, the researchers had to correct for multiple comparisons. The simplest method for this is something called Bonferroni correction, which would have forced them to get a p-value of 0.05/8 = 0.00625.

This looks like it should probably say 0.05*8 = 0.4?

Expand full comment

No, I think the original version was correct. Testing multiple hypotheses means you need a smaller p-value to be confident you're not staring at noise.

Expand full comment

Oh, I'm interpreting it backward. Bonferroni correction would have required them to get a p-value of 0.00625 [in order to remain significant after correction]. Whereas I was thinking "Bonferroni correction would force them to get a [corrected] p-value of 0.4 [from an unadjusted 0.05]".
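Both framings, side by side (a toy sketch; the raw p-value here is illustrative, not from the paper):

```python
# Toy sketch of the two equivalent framings of Bonferroni correction.
alpha, m = 0.05, 8
threshold = alpha / m           # 0.00625: each raw p-value must beat this...
raw_p = 0.03                    # hypothetical result of one of the 8 tests
adjusted = min(raw_p * m, 1.0)  # ...or inflate the raw p and compare to 0.05
print(threshold, adjusted)      # 0.00625 0.24 -> not significant either way
```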

Expand full comment

It seems that there are almost innumerable confounding variables between the guardian's receipt of hundreds of dollars and the child's brain scan.

Expand full comment

I love how this and the Bounded Distrust post came out within 24 hours of each other. Is it some commentary that media will report "scientific study finds X" when the truth is "scientific study fails to reject the null hypothesis but scientists still think X" when the media really believes and would like to push X?

Expand full comment

"poverty is obviously bad (...) If you don't agree poverty is bad, you've never experienced it." So my first instinctive reaction was to agree with you, and then, because I have this need to try to contradict everyone, I went looking for a reason why this statement may be wrong, and the analogy that came to mind is pain. I also thought pain was obviously bad, until I saw a documentary about kids born with analgesia, and a lightbulb went on: hey, maybe that bad thing serves a useful purpose, and your instinctive feeling about it being good or bad is unreliable.

The parallel with poverty at individual level would be the kids of rich people who are born in an environment where they can't ever conceive of themselves being poor and go on leading dissolute and aimless lives. The parallel at societal level would be something like communism, which while it certainly didn't manage to eliminate poverty came darn close to eliminating the idea that one could be rich, which is... more or less the same thing? (today's middle class lives in far more material wealth than kings of centuries past etc.) and people under communism certainly lost a lot of their drive and creative vitality and initiative and their societies suffered from it.

So knowing poverty, being able to experience some of it firsthand (particularly in youth) or conceive one could fall into it, or seeing it around us daily, can maybe be useful (trying to avoid the word "good" here) for individuals, giving them more drive to succeed in life, keeping them real, not being wasteful, etc., and for society, because it acts as a stick to make people try harder.

This doesn't mean you can take it to the extreme - too much is clearly bad - PTSD from getting tortured gonna outweigh any benefits from being more careful avoiding danger so as not to suffer pain, and too much poverty when it affects nutrition of kids, structural integrity of family, and other basic needs is gonna be unequivocally bad.

Now, how much poverty (or inequality, if you want to call it that) is still overall beneficial at the individual level, I don't have the slightest clue, beyond a strong suspicion that the threshold is higher at the societal level than at the individual level.

for context if useful: have personally experienced communism and then moderate poverty, but not extreme poverty. Currently not poor.

Expand full comment

I think you're reaching here. Eliminating the possibility of riches is like the opposite of eliminating poverty, no?

Expand full comment

I think any argument about the virtue of poverty, like arguments about the virtue of suffering, is ultimately trying to prop up old Medieval concepts of the world being just because it's God's creation or the more modern relativist formulation of there being a good side to everything.

There is some truth to the idea that one must experience bad things in life to have sympathy for others; the Japanese language captures this best by having a word for "heartless" or "unfeeling" that literally means "no tragedy" (implying that those who have never felt pain themselves cannot empathize with the suffering of others- or alternatively that the only way to avoid sorrow in life is to feel nothing at all). This is both true and (in my belief) inevitable. However, the fact that virtue can arise from suffering and evil does not make the suffering and evil itself virtuous. I think the difference here is that pain is mere sense-experience, whereas suffering (or "stress", or whatever other word you want to use as a basket for "unpleasant and negative internal experiences") is a condition of the soul- whether you take that literally or as a metaphor for qualia and internal mental states that can't really be empirically measured. You can, in fact, divorce the sensation of pain from the suffering that arises from it and accept it as simply another sensation, but you can never divorce suffering from suffering, you can only reduce needless suffering, which I feel accurately describes the negative experiences of most of the impoverished.

For those who disagree- would you consent to live for an entire year, with no escape hatch, as one of the lower-class in your society in order to cultivate whatever virtue you believe poverty creates? Would you have a family member you care about live like that for a year? Do you believe that people as a whole should engage in some period of renunciation so they live like a poor man, regardless of social class? I feel like these sorts of thought experiments are more illuminating about views of how a sort of person lives.

Expand full comment

Twin studies actually do find large shared environmental effects on cognition, but only in children. For reasons not fully understood, the heritability of IQ starts out low and increases throughout childhood and adolescence, with shared environmental contributions fading out. This is called the Wilson effect.

I suspect that this is largely a matter of parents in some households giving their children informal early education that has some transfer to IQ tests, and that this effect is swamped by 13 years of public school, but I'm not an expert and I don't think there's a clear consensus on exactly what's happening here.

Expand full comment

> you have to figure out how to convert a multi-dimensional result (in this case, a squiggly line on a piece of paper) into a single number that you can do statistics to. This offers a lot of degrees of freedom, which researchers don't always use responsibly.

You actually don’t have to convert a multidimensional result into a single number you can do statistics to. Multivariate statistics is a thing — it’s a huge branch that deals with *exactly* this kind of scenario — and it’s amazingly painful how often this gets ignored, leading to bad outcomes.

Whenever anyone tries to do a “Multiple comparisons correction,” please remember to yell at them on behalf of all the crying statisticians who just want you to understand that the right choice here is to do a multivariate analysis.
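For instance, a single multivariate test can replace a battery of per-band comparisons. A minimal sketch (invented toy data, not the paper's, and a permutation test on a Hotelling-style statistic rather than the exact F-distribution version):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two groups measured on five correlated outcomes, standing in
# for, say, EEG power in five frequency bands. Null case: no true difference.
n, k = 40, 5
base = rng.normal(size=(2 * n, k)) @ rng.normal(size=(k, k))  # correlated cols
a, b = base[:n], base[n:]

def t2_stat(x, y):
    # Hotelling-style statistic: Mahalanobis distance between group means.
    d = x.mean(axis=0) - y.mean(axis=0)
    s = (np.cov(x, rowvar=False) + np.cov(y, rowvar=False)) / 2
    return float(d @ np.linalg.solve(s, d))

# One multivariate test via permutation of group labels: a single p-value
# for "do the groups differ on ANY band?", with no Bonferroni needed.
obs = t2_stat(a, b)
pooled = np.vstack([a, b])
perm = []
for _ in range(999):
    idx = rng.permutation(2 * n)
    perm.append(t2_stat(pooled[idx[:n]], pooled[idx[n:]]))
p = (1 + sum(s >= obs for s in perm)) / 1000
print(p)  # one p-value for the whole five-band comparison
```

The design choice here is that correlation between the bands is modeled rather than treated as eight independent shots at significance.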

Expand full comment

re Bounded Distrust -- I gather this is one of the cases where experts/authorities flat out lied to the public? But it's ok because it was caught quickly and talked about on Twitter?

Expand full comment
Jan 27, 2022·edited Jan 27, 2022

I did fMRI research back in 2010-2011; the state of the field was so bad it convinced me to avoid academia at all costs. My thesis advisor heavily encouraged me to do the coloured jelly beans thing.

Expand full comment

"Shared environmental effects on cognition are notoriously hard to find. Twin studies suggest they are rare."

I don't like this prior, because while twin studies suggest shared environment effects are rare, the Flynn effect suggests that shared environment effects are (or at least were) common.

I'm annoyed that there's this giant glaring contradiction, this huge "notice you are confused" moment, where high-powered twin studies nearly flat-out contradict high-powered Flynn effect studies... and yet the entire IQ community just shrugs and dismisses one set of studies without introspection.

Expand full comment

I am Greg Duncan and one of the authors of the PNAS study. I am speaking for myself. I was trained in economics and not neuroscience but participated in virtually all of the analyses of the EEG data and in writing up the results.

First off, I would urge everyone to actually read the paper and its appendix, both of which are freely available on the PNAS website, to see what evidence we present and the words we use to describe that evidence. Many of the issues raised in the original story could have been resolved with a careful reading of the study.

*Shared environmental effects make the paper suspicious*

Our analyses are based on random assignment of different economic environments (i.e., a high or low cash gift payment) to equivalent groups of families. So shared environmental effects are not a confound.

*Cognitive tests and not EEG are the most reasonable ways of measuring cognition*

The children participating in our research were 12 months old, precluding measurement using cognitive tests. We are measuring electrical activity and not cognition. Certain patterns of infant EEG have been found to correlate with later thinking and learning but we do not imply that EEG at 12 months is a measure of either thinking or learning. In our upcoming age-4 data collection we will be gathering both EEG and more conventional assessments of thinking and learning.

*Researchers do not always process EEG data responsibly*

Our procedures, including data processing, are explained in Troller-Renfree, S., Morales, S., Leach, S., Bowers, M., Debnath, R., Fifer, W., ... & Noble, K. (2021). Feasibility of Assessing Brain Activity Using Mobile, In-home Collection of Electroencephalography: Theory, Methods, and Analysis. Developmental Psychobiology, 2021. Code for data processing is available on GitHub (https://github.com/ChildDevLab)

*People love seeing visible EEG effects*

Figure 2 provides the picture, but statistical analyses (reported in Table SI6.1) are provided to test whatever differences might be observed.

*Replication studies often produce lower effect sizes*

We welcome replication studies.

*The study has enough yellow flags to warrant checking into it*

We welcome close scrutiny, especially from people who have given the paper and its supplemental materials a close reading.

*They conclude that financial support changes brainwave activity*

From the abstract: “Unconditional cash transfers may change brain activity…” We never claim that we have established a causal link.

*All differences lose statistical significance after adjustment for multiple comparisons*

Results for our preregistered hypotheses are featured in Table 2 of our paper, and we pointed to their p>.05 nature in several places in the paper, including our conclusion. Results for additional, non-preregistered, analyses are also presented in the paper and, especially, appendix. Some of these results are statistically significant, even after multiple testing adjustments. These supplemental analyses play a substantial role in the conclusions that we draw.
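For readers unfamiliar with what a multiple-testing adjustment actually does to a set of p-values, here is a minimal sketch of the Holm-Bonferroni step-down procedure, one common adjustment. The p-values below are invented for illustration and are not taken from the paper:

```python
def holm_adjust(pvals):
    """Holm-Bonferroni step-down adjustment of a list of p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # Multiply by the number of hypotheses still "in play" at this rank,
        # then enforce monotonicity so adjusted p-values never decrease.
        running_max = max(running_max, min(1.0, (m - rank) * pvals[i]))
        adjusted[i] = running_max
    return adjusted

# Four nominally significant tests; after adjustment only two survive at .05.
print(holm_adjust([0.01, 0.04, 0.03, 0.005]))  # [0.03, 0.06, 0.06, 0.02]
```

This is exactly the dynamic at issue: results that clear p<.05 individually can fail to clear it once the number of tests is accounted for.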

*The abstract sure does say “infants in the high-cash gift group showed more power in high-frequency bands”*

It certainly does, but, in the abstract, the conclusion that we draw from these differences is not couched in causal effect language.

*Can we just say that regardless of stats, we can eyeball a significant difference here (in Figure 1)? Andrew Gelman says no.*

So do we. We state that Figure 1 describes our data but that proper statistical testing is needed to assess whether the differences pass muster.

*The graph (Figure 1) proves nothing*

We agree.

*But this study basically shows no effect.*

Results in Table 2 for our preregistered hypotheses do not show p<.05 results – a fact that we clearly acknowledge in several places. Analyses of preregistered hypotheses are properly accorded a great deal of weight in reaching conclusions from a piece of research. Gelman’s analyses of our data focused exclusively on those preregistered hypotheses, as have almost all of the blog post reactions. Had we stopped our analysis with them, we probably would not have tried to publish them in such a high-prestige journal as PNAS.

But the paper goes on to present a great deal of supplementary analyses. Statisticians have a difficult time thinking systematically (i.e., statistically) about combining pre-registered and non-preregistered analyses and often choose to give 100% weight to the preregistered results. That is a perfectly reasonable stance — and is what classical statistics was designed to do.

For us, the appendix analysis of results by region (in SI6.1), coupled with the visual regional differences in Figure 2 and by results from the regional analyses in the past literature, led us to judge that there probably were real EEG-based power differences between the two groups. Our thinking, which is explained in the paper, was reinforced by our non-preregistered alternative approach of aggregating power across all higher-frequency power bands. This gets around the problem of the rather arbitrary (and, among neuroscientists, unresolved) definition of the borders of the three high-frequency power bands and eliminates the need for multiple adjustments. Results (in Table SI7.1) show an effect size of .25 standard deviations and a p-value of .02 for this aggregated power analysis.
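The “.25 standard deviations” figure is a standardized mean difference. For readers unfamiliar with the metric, here is a minimal sketch of how such an effect size (Cohen's d with a pooled SD) is computed; the data below are invented for illustration, not taken from the study:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference between two groups, using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical aggregated high-frequency power scores for the two gift groups.
print(cohens_d([2, 4, 6], [1, 3, 5]))  # 0.5: means differ by half a pooled SD
```

A d of .25 means the group means differ by a quarter of a standard deviation, a small-to-modest effect by conventional benchmarks.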

As our cautious “may cause” language in the paper suggests, we are far from assigning 100% weight to the supplemental analyses, but our weighting of that information is definitely not zero. That non-zero weighting of the non-preregistered results led us to our “weight of the evidence” conclusions from all of the analyses we conducted, while Gelman has good reasons for an interpretation that assigns zero weight to anything other than the preregistered results. I hope everyone agrees that we will know a lot more with the age-4 data, but we, unlike Gelman, believe there is enough going on to bring the age-1 results to the attention of scientific and policy audiences.

*Stuart Ritchie says that this article was accepted under rules that give it an easier ride*

Two reviewers provided detailed comments on the first draft, which led to extensive revisions.

*Heath Henderson says that the paper was not preregistered for beta and it showed the biggest impact. This should raise red flags*

Not for someone who actually takes the time to read the paper. Section SI4 explains that, at the time of preregistration, beta was not preregistered owing to sparse evidence on its association with income. By the time we began our analysis, several papers had established an income correlation so our analysis plan was updated accordingly. Table SI4.1 shows that differences in results for multiple testing adjustments with and without the inclusion of beta are trivial.

*Julia Rohrer compares this study to an experimental study of foster care placement out of Romania and finds differences*

One of the authors on our study is a PI on the Romanian orphanage experiment. There are too many extreme differences in the environmental conditions and nature of the experimental treatment to begin to make comparisons between the studies.

*We can quibble about whether the study might be suggestive of effects…*

Yes, let’s quibble…or maybe consider all of the information more carefully. We do not claim an ironclad case for causal effects and, as suggested above, recognize that anyone wanting to consider only the pre-registered results could justifiably conclude that we cannot reject the null hypothesis. We, however, conclude that the weight of the evidence taken as a whole supports a possible causal connection.


This is not your field, but this criticism seems to be pretty damning:

https://astralcodexten.substack.com/p/against-that-poverty-and-infant-eegs/comment/4696231


As someone who was a professional fMRI researcher, yes you absolutely should have default suspicion. I hate the trend of psychologists slapping on an EEG or fMRI component to their paper to make it seem more technical and "biological", when there is a much-more-relevant behavioral outcome they should be studying.

Stick to using brain-imaging for what it can actually teach us about: the structure of the brain, and maybe a bit about how it computes stuff. (Also, I may be biased, but I think you should be 10X MORE suspicious of EEG. It's such an incredibly noisy technique, and unfortunately has a really low activation cost to slap on your study. Anyone can buy one off the shelf and use it in a way that guarantees tons of weird artifacts.)


Responding to Carson: everyone should doubt every research paper going into it, but then have an open mind about whether the paper (plus, in our case, the published EEG methods paper I reference in an earlier post) provides a convincing case in support of its conclusions. I would be very interested in hearing your judgement after you read the paper and its supporting materials. The lead authors (Noble, Fox) on the paper are distinguished neuroscientists specializing in these techniques and in young children, and we were advised by a broader set of distinguished neuroscientists. Maybe we got it wrong, but I need you to tell me why.


"beg you to believe I would have come up with the same objections eventually."

hahaha...I feel ya; from me, anyway, you have the benefit of the doubt.


This comment section is mostly dead, but I just wanted to say I came across a reference to this study in the wild (a class discussion of a different article that cites it at length) and got to say: Hey! I know this study, and they did dubious stuff with their p-values!

(and to her credit, the prof was like oh I didn't know that, I'll keep that in mind.)
