126 Comments

I wonder if someone has a historical trove of instructions to letter writers for tenure cases. When I write tenure letters these days, I am specifically asked to comment on the venue of the candidate's publications as well as the quality of the publications themselves. The idea is that prestige of venue is more legible to a dean or provost or regent, while quality is legible only to disciplinary specialists. If this changed in the 1970s, that would fit with a rise in the importance of journal prestige, rather than just employer prestige.

The 1970s would make sense as a time for this. I hear that the worst academic job markets before the 2008 financial crisis occurred in the 1970s, as baby boomers were completing their PhDs but the universities were all full up with faculty of the previous generation, hired to teach the boomers as undergrads, so the boomers largely got shut out. (In philosophy, see: http://schwitzsplinters.blogspot.com/2011/12/baby-boom-philosophy-bust.html?m=1 )

Interestingly, my personal opinion of Nature and Science is that they're prestigious places for people who publish really awful work in the social sciences and philosophy, because they don't know how to referee these things. The paper I think of most that was published in Nature is a truly awful one by Karl Popper and David Miller in the 1980s, where they published their millionth denunciation of Bayesianism, on the grounds that the probabilistic support of H by E can be factored into the support E gives "H or E" (which is deductive) and the support E gives "H or not E" (which is negative). Thus, they conclude yet again that there is no such thing as positive inductive support, and we should all be falsificationists.


So there was a need in the 1970s, due to that academic job market, for some additional scales to rank tenure applicants? And there the journals were, to create, or mirror, that scale.

I’ve heard about that 1970s academic job market from boomers who dealt with it. I wonder if part of the motivation for university department expansions in the 90s was guilt and regret over what had happened twenty years earlier. Maybe also a desire to entrench and expand to recession-proof themselves.

That “H or E” argument reminds me of the trick where one can seem to count someone else’s fingers and get 11. Fortunately for Popper and Miller, using symbolic logic means they’re automatically correct!


Expansion in the 90s was likely due to the Millennials entering college, as the first generation bigger than the Boomers.


The millennials were born in the 70’s and early 80’s? Are we just calling everyone born in the last quarter of the 20th century millennials now?


If you were born in 1980, you sometimes get counted as a Millennial. "Gen X" is usually 1965-1980, but lumping in people who graduated college and went to work without the internet with people who only vaguely remember life without email seems silly. I've also heard "X-ennial" for very late-70s and early-80s babies. My husband and I fit that bracket, and he started college in the 90s.


I suppose people dislike having generational divides that are less than 10 years, but considering how niche the internet was until the 2000's, it seems crazy to me to lump in people born in the first half of the 80's with those born in the 90's. Most people wouldn't have home internet access till the 2000's; Statista puts only 18% of homes as having internet access in 1997, and 41% in 2000. (I recall people didn't start talking about millennials entering the workforce until ~2010 or so, which seems to sync up with that as well.)

I think pushing millennials back to the 70's really misses the sharp cultural change that was happening at the time, with those born in the 90's experiencing a very different world than those born just 10 years before. I can see making a break with those who graduated college before the internet was relevant at all, but there is definitely a group in between them and millennials, a group who largely grew up without the internet and mostly didn't even own computers in the home. Maybe just a 10-15 year generation that spanned the gap between aerial-antenna TV and rotary phones at one end and the internet on the other.

Any farther back and you get to the point where Millennials are responsible for making the internet we know today, and that's... a bit much.


Internet access was not niche; it was prevalent among the upper-middle class, i.e. the people who actually matter, as well as at school.

I was born in 1985 and my family always had the Internet, and while we were early adopters, all of my high school friends had home Internet access, and all the schools had internet access from like 1993 or 1994 at least. So anyone born after 1980 would have Internet access by high school through school at the very least.


I was born in 1980 and think of myself as a millennial, so that's what I was thinking.

But I guess the relevant thing is that the number of people born in the US reached a low in 1973-1975 and then increased basically every year until a peak in 1990. Thus, the 1990s and early 2000s were a period of growth in universities.

https://www.infoplease.com/us/population/live-births-and-birth-rates-year


That’s interesting about a hiring bust in the 70s. I wonder if it’s also connected to massive hiring in the 60s. I know loads of people in philosophy who were hired in the late 60s and it sounds like a walk in the park. I always thought it was because more young men were going to college (in the US) to avoid the Vietnam draft, thus increasing demand for faculty. By the 70s that need would have been sated and Vietnam was ending.


What % of American students are born in America?

Here in Canada almost 20% of university students are foreigners, so our student numbers are fairly divorced from our population decline.


I have no business trying to make this argument, it’s been too long, but if I don’t write it out I’ll be up all night thinking about it.

The problem with that assertion against Bayesianism is that H and E are assumed to be unrelated phenomena… if there's overlap, then the support E gives "H or E" (H U E) would be deductive, and the support E gives "H or not E" is not negative, because H U not-E will contain the bit of H which is also in E.

Bastards… I’ve never sat down to learn any of the structure of Bayesian logic but maybe this summer I will have time.


P(H or not E | E) < P(H or not E) is a provable theorem. That’s what I mean by that support being negative.
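(To spell that out, here is a sketch of the standard derivation - my own reconstruction, for anyone who wants to check the claim, assuming 0 < P(E) < 1:

P(H or not E | E) = P(H | E) = [P(E) - P(not H and E)] / P(E), since conditional on E the disjunct "not E" drops out.

P(H or not E) = 1 - P(not H and E), since the negation of "H or not E" is "not H and E".

Subtracting, P(H or not E) - P(H or not E | E) = P(not H and E) * (1/P(E) - 1), which is positive whenever P(not H and E) > 0 and P(E) < 1 - that is, except in the degenerate cases where E already entails H or E was certain to begin with.)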


Falsificationism is a much better philosophy than rationalism tho


I’ll be the one to ask. What is falsificationism better for?


Finding true facts.


It's much less messy. ;-)


Because it is the only way to actually test your beliefs versus reality. It doesn't matter how sound your logic is if the base assumptions are false.


I was wondering something similar: whether that transition in the 70's met a "need" for a more legible measure of an academic's performance. I was thinking it might be due to a large expansion of post-secondary education (due to more women going to college?), and thus a large expansion in the number of faculty. I'm having a hard time finding data for this hypothesis... there certainly seems to be pretty rapid growth in the latter half of the 60's: https://www.statista.com/statistics/183995/us-college-enrollment-and-projections-in-public-and-private-institutions/

Of course, the nature of science (ahem) changed completely after WWII, becoming heavily institutionalized and government funded. Perhaps the desire to evaluate academic quality, resulting in the creation of prestige publications, came from the funding agencies...


I got curious and googled Popper & Miller's ... and here is something interesting: if you are talking about Popper & Miller, 1987, "Why probabilistic support is not inductive", it was published in the Philosophical Transactions of the Royal Society, Series A.


Correction to my correction: there is a Nature paper, "A proof of the impossibility of inductive probability", from 1983. A Google search ranks it lower than the 1987 one.


Here's what I think happened about a half century ago to make "Nature" and "Science" more prestigious: they probably hired PR people to work with the New York Times and other important media outlets to publicize their upcoming articles.

Today, when there is a big paper in Nature or Science, the New York Times science section often has a high-quality write-up simultaneously. I presume the NYT was alerted ahead of time about what was in the pipeline at Nature or Science and cajoled to feature various upcoming papers, which the NYT selects among.

As evidence, NYT articles about big stories in Nature or Science almost always feature a quote from an eminent scientist who didn't work on the paper but has read it and says it's important. That's the kind of thing that takes a while to work out, so I presume that Nature and Science have been working with major media outlets to make this happen for some time.

It's kind of like how movies hit the theaters on Friday, but the newspapers always run reviews in the wee hours of Friday morning. How do they do that? Critics are invited to see screenings, often on the preceding Tuesday. (This is the opposite, by the way, of opening nights on Broadway, which get reviewed in the next morning's newspapers. The cast traditionally stays up all night to read the reviews to know whether they'll have a job for the next year or if they need to start looking for other work.)

In the past, however, there wasn't this much effort at coordination. For instance, the most famous paper in the history of "Nature," Crick & Watson's structure of DNA article, appeared in Nature (in London) on April 25, 1953. But the NYT didn't carry the story until June 13, 1953:

https://www.nytimes.com/1953/06/13/archives/clue-to-chemistry-of-heredity-found-american-and-briton-report.html?searchResultPosition=6


Nah. Nobody in the business gives a shit what the NYT says about anything scientific. It's almost always embarrassingly stupid. Pop science to scientists is like fingernails on a blackboard. I can only read popular articles on subjects about which I know next to nothing professionally; otherwise it's just too painful.


Your claim seems to me to be more reflexive contrarian bias than rational assessment.

It does not seem that plausible that the most-subscribed and highest-status newspaper in the world is largely considered irrelevant and "embarrassingly stupid". And in fact the statistics would seem to point in the exact opposite direction of your claim: of 24 news outlets, reading the NYT is the 4th most highly correlated with level of education [1]. And there is a well-established correlation between a scientific article receiving media coverage and subsequent citations [2].

That said, I agree with you that popular science writing is not awesome. But you are confusing your personal perspective with a claim that getting an article published about your work in the NYT is bad for every purpose and from every perspective.

[1] https://www.pewresearch.org/politics/2012/09/27/section-4-demographics-and-political-views-of-news-audiences/

[2] https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0234912


All very interesting speculation, and if I didn't actually personally know a ton of scientists, and if I weren't therefore speaking on the basis of empirical observation, I might find this theory worth the effort to refute.


Other newspapers also carry stories simultaneously with Nature's publication, so I presume they send out a general press release.


Sure. They work with the Associated Press and Reuters too.

But I bet the big change a half century ago that this book reviewer wonders about was that Nature and Science started devoting more resources to cultivating influential news media to cover their articles.

Similarly, I bet you could figure out when research universities started hiring lots of publicists to write up background text on their professors' work. The quality of stuff coming out of university PR departments these days is quite good -- highly readable prose that the professors being publicized have checked over to make sure there are no mistakes.


Would you agree that your second paragraph describes a case of Turchin's elite overproduction?


I don’t recall Turchin’s full theory. This is a case where too many people were trained for a particular type of position. But on this story, it’s not a systematic thing - just a byproduct of a large birth cohort being followed by a small one, with employment prospects in this field for one cohort being tied to the population of the cohort a generation younger.

The fact that the academic job market has been difficult ever since (though not as difficult between about 1980 and 2007) is more relevant to the Turchin thesis, I would think.


[Meta] Could we (somehow) use a Manifold market instead of voting to determine the winner?


What benefit would that have?


It just feels like it should be possible to improve on voting for assessing the community's consensus on what the best book review is. For example, maybe your vote should be weighted by how many of the reviews you read, or by how much other people trust your opinion, etc. Plus of course getting all your friends to vote for you hurts the meaningfulness of the results.


On the other hand, a decent number of people in the SSC community are more skeptical of prediction markets than Scott, or don't want to use them for other reasons. Restricting voting to people who are as into prediction markets as Scott and (apparently) you will bias the result in terms of how the sum total of the SSC readership likes the reviews.


What benefit would that have?

A Manifold market about the vote outcome would make sense. But I don't believe any prediction market will do very well on tasks where it has no externally imposed outcome or some other signal.

Market economies work because individual participants vote with their wallets and buy goods and services (the price signal). Sometimes it will take a while for reality to impose itself on the markets, though, and until that happens a market can have speculators who react only to valuations by other speculators.

Prediction markets need something similar as a genuine price signal, too; otherwise they become self-referential, tulip-bubble-producing "markets" (and/or possibly a way for prediction-market nerds to be parted from their money).

Suppose that after the election period, Scott took the reviews down and published the best of them as paid-tier content on Substack. It would make some sense to use a prediction market to predict which ones would attract the most subscribers, and to declare those the winners.


Further to the self-reinforcing bubble effect, a Manifold market for "who will win this Manifold market" also feels like a Keynesian beauty contest. People vote for "what is popular" instead of "what is good". (https://en.wikipedia.org/wiki/Keynesian_beauty_contest)


Thanks! I think I was trying to describe a Keynesian beauty contest but had forgotten the name for it.


> In other words (and getting rid of the old-fashioned capitalization of random adjectives and nouns)

Why did people stop using capitalization for emphasis? If you capitalize according to specific rules, the capitalization is redundant. Only arbitrary capitalization expresses something. Consider for example "Scientific Work", "Scientific Discovery" and "Scientific men" - "Work" and "Discovery" are capitalized but "men" isn't, which suggests a focus on results over individuals. Or consider "Daily Life". Usefulness for "Daily Life" is not treated as something shameful, something that makes science less pure, but as a matter of Pride.

Maybe improvements in typography and printing made other kinds of emphasis possible - bold, italic, etc. Capitalization is still used for emphasis on platforms that allow only plain text.


> Releife of aged impotent and poore people

Where would I find such a charity (asking for a friend)?


> If you capitalize according to specific rules, the capitalization is redundant. Only arbitrary capitalization expresses something.

You seem to be presuming that the only sort of information that can or should be expressed by capitalization is emphasis. With respect, that seems obviously untrue. For example, consider the rule that “the names of countries, and words derived therefrom, must be capitalized.” Capitalization according to that rule allows us to determine the *very meaning* of certain words in text—think about “Polish” vs. “polish,” etc.

Of course, one could argue that maybe English shouldn’t have any words whose meaning changes with capitalization. But it does, and so it’s just not true to say that non-elective capitalization—capitalization done because of the rules of the language, rather than the choice of the writer—is in some way “redundant” or contentless.


It also makes things easier to read/parse, for me; paragraph or line breaks don't express much information, but it would be hard to do without them.


This is the main reason imo: ease of reading. If you want emphasis, put the word in italics or even bold it. Only capitalize it for emphasis if you're aiming for the sort of ironic, self-important tone that Scott sometimes uses.


Paragraphs do express the structure of the text. Unlike capitalization, they don't follow simple rules.


That's why I said they don't express *much* information — the meaning of a text isn't changed, only maybe the places you put a pause when reading.

But perhaps spaces between words are a better example.


Probably when communication by speaking became significantly replaced by communication by (silent) reading. In the 1850s it was still pretty common for people to absorb even quite long communications by reading them aloud and listening to them. My impression is that up until the 20th century it was fairly common to add typographical clues (e.g. capitalization of certain nouns) that would aid in reading aloud. In this case, one is probably expected to pause and lay stress on the capitalized nouns. Not because they represent more important ideas than some other ideas that could be expressed by the same nouns, but because *in that particular sentence* the meaning is better understood if, when speaking it, you lay stress on those words.


An article by Jon Lackman in Slate on why the Declaration of Independence has so many but so irregularly capitalized words: e.g., "When in the Course of human events ..."

"In the century prior to 1765, nouns were generally capitalized. (The reason for this is now obscure; Benjamin Franklin hypothesized that earlier writers “imitated our Mother Tongue, the German.”) By the Revolutionary War era, however, chaos was the rule. Everyone, it seems, had a different style, and individual authors vacillated from one sentence to the next. ..."

My guess would be that lots of capitals look heavy-handed, which was out of fashion during the Enlightenment (which is capitalized, of course.)

"Other founders, including Jefferson and Madison, dropped caps with reckless abandon."

But the scribes who wrote up the formal version from Jefferson's draft liked a lot of capitals.

"... In America, Franklin attributed the change to printers who felt that light capitalization “shows the Character to greater Advantage; those Letters prominent above the line disturbing its even regular Appearance.” Author Thomas Dyche wrote in 1707 that capitalizing all nouns is “unnecessary, and hinders that remarkable Distinction intended by the Capitals.” As you can see, Dyche couldn’t convince his publishing house to change its ways, even for his own book."

Eventually, heavy capitalization faded in the English-speaking world. But then German philosophy became fashionable from the 1830s with people like Coleridge and Emerson. Some German concepts sounded more profound in German so they were often left untranslated. The Germans still capitalized each noun and American students tended to bring over the capital with the noun: e.g., "Zeitgeist" rather than "zeitgeist" for "spirit of the age."

My vague recollection is that 50 years ago, William F. Buckley enjoyed satirizing the pompous spirit of his age by spelling it "the Zeitgeist." I carry on that affectation when I write about "the Woke" although I doubt if many readers get my obscure humor.


Capital letters are wildly inefficient. Having a whole additional alphabet of 26 more letters (in effect) is an absurd level of complexity for the extremely marginal benefit we get from it.


And here I am wishing text figures (aka lower-case numbers) were used far more often than they currently are.


Hey now, I need those for variables! We even use Greek, and in a couple of specific places Hebrew and Cyrillic (א and Ш, to be specific). Still not enough sometimes...


Efficiency is the hobgoblin of small minds. Many of the best things in life are "inefficient"- as are many of the things that make life worth living.


Well, the original idea was to invent letters that could be more easily written quickly with a quill pen than the letters that one incised on stone.


>Why doesn’t Making Nature talk about this? One possibility is that the Guardian article is mistaken or exaggerated. Surprisingly, this was difficult to fact-check: I googled around and didn’t really find any other references to Cell having had such a transformative effect on scientific publishing. It could mean that the real effect wasn’t that dramatic — or it could mean that Cell’s impact has been overlooked. I’m tempted to believe the latter, since I otherwise don’t know of a good explanation for Cell’s considerable prestige.

The Guardian article about Cell is correct; it really was founded with the goal of being exclusive and prestigious. I don't think Cell can be formally proven to be the cause of increasing selectivity across publishing as a whole, but it certainly seems plausible.


Whether it was the cause of a trend, or one instance of the trend that had already started, the important thing for this particular discussion is just that there was such a trend at that time.


"I grew curious about this when I realized that most researchers treat journal prestige as a given. Everyone knows that Nature and Science matter enormously, yet few would be able to say why exactly."

We should be curious indeed when scientists take an obviously non-scientific criterion such as prestige "as a given." While the review delves into *how* Nature came to acquire its prestige, perhaps the deeper question is *why and whether* prestige should play the role it does in institutional science. This question cannot be adequately answered in a merely sociological or historical way.


In an ideal world, you're absolutely right that prestige should play no part in science at all! Only objective truth/reality/data should matter!

In the real world, scientists are human and hence have limited time, energy, and attention span, all of which are completely overwhelmed by the amount of research being published. If I carve two hours out of my day to read the latest research in my field, what is going to provide me more interesting/useful/potentially groundbreaking information - reading the latest paper in my field in Nature, or in the International Proceedings of Molecular Cancer Immunotherapy*?

If you answered "Nature," congratulations, you win.

THIS is why prestige matters - not because scientists are status-obsessed (although some of us are, to be sure), but because it's a useful heuristic for "this paper is worth spending my extremely limited time/mental energy/spoons on."

*This is a made-up title. In general, the longer the title, the lower the prestige. "Whoville" = highly prestigious; "International Annals of Grinch Experimental Cardiology" = dinky journal that no one reads.


There's no question that one must engage in information filtering, and that effective filtering will, in part, involve heuristics.

The question is whether 'prestige' is, in fact, a good heuristic. I'm open to the argument that, prior to the rapid expansion of institutional research funded by the federal government in the decades following World War II, prestige did correlate reasonably well with quality. I would argue that, in the subsequent decades, it has come to correlate less and less well.


What might a better heuristic (that works at the scale we're talking about) look like?


I think we first need to step back and ask whether the large-scale institutionalization of research funded by the federal government is a system that can be made to work well or whether it necessarily embodies incentives counter to those of science itself.


In the real world, scientists in several fields, if they set aside time to keep up with what is being published, often ignore Nature and Science and read the announcements of what's new on arXiv. Those two magazines are certainly beloved of deans, funding agencies, and promotions panels (especially if biologists are members) but physicists, mathematicians, and AI researchers look at arXiv to stay current.


Individual prestige and status are important motivators for scientists. I'm less sure about journal prestige.

Our university library gets Nature and Science, in print. They are prominently displayed in the reading room.

I have never seen anyone read them, besides myself. No matter the time or day, I am free to pick up either magazine and peruse it at my leisure.

Before you ask: the daily newspapers are always taken; it seems impossible to obtain today's paper unless one is willing to wait. (And in today's academia there is actually also genuine work to do; one cannot have indefinitely long coffee breaks.)


Huh, IME it's the opposite. Anything in Nature is a year-old gloss lacking crucial details; if you need to stay current professionally you scan arXiv for relevant papers, and if you need to replicate an experiment it's Phys Review or Phys Letters that has the required level of detail. (There are multiples of each of those, sorted by subfield; I think all the stuff in my field is in Phys Review D? I didn't have to worry about it much as a PhD student, though, and decided not to become an academic.)

Editing to add: I guess this doesn't argue against "prestige" more generally, but it does suggest that narrow, domain-specific status is more important than generalist prestige if you want to do good science.


Better prestige than money, surely?


Are we reduced to choosing between obviously non-scientific criteria to guide science? That would suggest we need to evaluate more critically the role of science.


I'd never heard of Cell when I first came across a paper related to my work in it. The quality of the research was just a step above anything I'd seen in domain-specific journals. It's the same thing with Nature and Science. They reign supreme because that's where anything worth reading is published. Many pharma companies won't even research a lead compound if the discoverers didn't publish in a CNS journal. There's too high a risk that the original research was done badly and won't replicate.

EDIT: I agree with both commenters that I'm being overly harsh. There are some good domain-specific journals. There are just A LOT of bad ones.


"anything worth reading" is overly harsh, IMO. There is certainly worthy research being published in non-CNS journals. The ratio of manuscripts submitted to manuscripts accepted at CNS is so high, they simply cannot publish all good research. It's like looking at Google receiving 1000 applications for one position and thinking, "The 999 applicants who didn't get the job must have sucked!"


The current reputation of Nature and Science is for prioritizing 'sexiness' of results above scientific rigor (Nature has a subjournal - Sci Reports - focused on rigor above impact). I've seen people in both chem and bio note that they find results in 'medium' field-specific publications - like JACS or J Neuroscience - and even field-specific subjournals of CNS - Nature Physics, Molecular Cell, etc. - to be much more reliable than papers in Science or Nature proper.


JACS is way more useful than Nature. I'm kind of shocked to hear you call it "medium" though. Never heard that before.


Oh, I (and the people I picked it up from) use 'medium' for anything that is prominent, but still field specific (to distinguish it from CNS or PNAS). So JACS and even Nature Physics and Neuron would fall into this category. Just different standards for the word, I guess, and referring more to the audience size.

Cell is also technically field specific, but the audience is huge. And I think most biochemists would prefer to publish in Cell over JACS if the work fits both, which leaves JACS with a more limited scope.


This is to misinterpret the role of Nature and Science. As the saying goes: if you're not making a few mistakes, you're not being ambitious enough.

The role of CNS is not to represent authoritative truth; it is to represent a best consensus as to the most interesting ideas. Of course some of those ideas will be wrong.

If you are not operating in the space where "interesting ideas" matter, but rather in the space where only "validated results" matter, you won't see the value to this.


I'd say that in my field (climate science), Nature and Science papers generally have flashy results but are often actually less rigorous than papers in "normal" journals, and the percentage that are fundamentally flawed or that don't really show what they claim to show is probably in the double digits (for both kinds of journals). So I'd guess the relative quality may vary by subject, depending on how well the CNS journals handle reviewing and care about actually rejecting bad research.


Darwin's book (perhaps one of the ten most influential books ever published) was called "On the Origin of Species".


Some thoughts as a mathematician:

Pure math has a prestige hierarchy that is somewhat independent of the rest of science, but very comparable. Top math papers are never published in Science, Nature, or (obviously) Cell. My understanding is both that they wouldn't be accepted and that mathematicians wouldn't want to be published there (well, it would be good for tenure, but not for social reasons).

But math has its own set of top, most prestigious journals, consisting of the Annals of Mathematics, the Journal of the American Mathematical Society, and Inventiones Mathematicae, plus or minus two depending on who you ask. They were established in 1874, 1988, and 1966, respectively (so the 19th century, or roughly around the 1970s).

Making a journal more prestigious by being more selective, or creating a new, very selective and prestigious journal, is something people try to do, with some success. The key is that you need people to submit enough good papers - just rejecting all papers is not a sound business model for the publisher, nor is it going to lead to any prestige. This is usually done by getting top people to serve as editors and asking them to ask their friends to submit papers. Having top people as editors sometimes helps to draw submissions all on its own, as does having some kind of positive mission, such as a commitment to diamond open access publishing.

Despite the fact that selectivity is generally seen as the key to prestige, some journals maintain large backlogs, which often leads to editors asking reviewers to be very selective. I'm not sure what happens but I guess that reviewers, biased towards their field and with an accurate view of the journal's past quality, don't raise their standards high enough and editors have a hard time going against the reviewers.

The current system - where you put your papers on arXiv so people see them, and then publish them in a journal years later so tenure and grant committees will know how prestigious they are - feels so natural that I have no idea how it came about. It's interesting that there was a similar-ish system in the 1800s, but Nature and Science have switched roles!

I would guess that what happened in the 1970s was not the specific influence of Cell, but rather that broader social trends created an opportunity for journals to ride the selectivity => prestige => more submissions train, which Cell took advantage of. I would guess the increasing size of academia made it harder to evaluate people based on personal knowledge, leading to greater pressure to use journal prestige as a key factor; then the growth slowed, making the job market more competitive.

In general a lot of facets of society seem to have gotten more competitive, with people doing more and better-quality (at least as legible to an outsider) work to achieve the same jobs.


Definitely more years of schooling required. Is the postdoc a new thing?


Interesting review, I didn't know much about the background of Cell!

On the impact of the web: another journal perhaps worth including in this discussion is eLife, which attempts to reimagine the prestigious journal in the age of web browsers and preprint servers. It may not be well-known by the public yet, but it is taken seriously.


Suppose that you wanted to create a new scientific journal. For example, I think that there are some Seeds of Science people around here.

What should be done to make this new journal successful?


In what sense? Making a new arXiv for a field that doesn't yet have one is quite distinct from making a journal that university hiring boards respect.


This is a very interesting review, and pretty well written, so well done Mr/Ms Author.

I would agree the prestige matters a lot more now than it even did at the start of my scientific career (in the 80s). In those days, it was *starting* to matter where you published, but it still wasn't all that important, so long as you picked a "respectable" journal. You could make different choices depending to some extent on the community that you knew read each journal, and who you particularly wanted to reach. But, yes, one's reputation depended a bit more on stuff like how many people cited you, or invited talks, and less on which journal your articles appeared in. That slowly and significantly changed.

Why so? I agree this is a $50,000 question, and a lot of people have wondered about it. I'll give my own WAG, based on nothing in particular other than having been around a while.

It's valuable to remember that *before* the Second World War, being a "scientist" wasn't an especially prestigious (or well-paying) career. It was sort of like being a painter, something certain people had a fetish for doing, and what happened in the field was quite interesting to others with similar perversions, but not so much outside of that area. Science did not pay much per se -- most people were hired by universities to teach, and did science kind of on the side, because they wanted to, but it didn't pay, and nobody *aside* from other scientists really cared whether you did it or not. There was no such thing as the government research grant, the securing of which occupies a shocking amount of the modern scientist's time.

The war totally changed all that. It became manifestly clear to all governments that scientific research could pay off big time -- could produce new technology for fighting wars, typically, but also technology that could establish dominance in civilian areas as well. I'm not sure why the war catalyzed the latter as well as the former, but it did. Before the war, technical innovation was something that arose from private efforts and private means. After the war, it was considered an appropriate field of endeavour for government -- government was *expected* to encourage and support technological innovation. The NSF was founded in 1950, NASA in 1958, DARPA in 1958, the NIH was authorized to support research with money in 1944, and so on.

Concomitantly with this there came a giant surge in the public visibility and prestige of science. Science built the atom bomb. Science gave us rocketships, and lasers and radar, and computers, and penicillin. Wow! The visibility of basic research, and the belief that it had a strong impact on everyday life, rose steeply in a way that is probably historically unprecedented[1].

Initially this was probably just gravy for those who were in a position to benefit -- those who were already scientists in the 1940s and 1950s, who had grown up in the old system. No doubt the surge in prestige and money encouraged many people who were already educated (at the undergraduate level) to go further in their education, get PhDs, and get into science -- as it was meant to. But I think for a long time the supply did not grow as fast as the demand, in part because it just takes a long time to mint a new scientist. Someone has to go to college, major in science, then go on to a graduate education. So, starting from an interest at age 15-18, even when a post-doc wasn't de rigueur, it still takes ~10-15 years.

However, what *also* tremendously increased post-war was an encouragement by the government for young people to go to college. College enrollments just exploded after the war, thanks to the GI Bill and a steady stream of government encouragement (financial and otherwise) since then. Going to college in 1935 was, again, something you did if you had a yen or were part of some social elite that needed to make contacts within your social class. It wasn't seen as a big career skills foundry until after the war.

Eventually the supply *did* catch up to the demand, and (as is the nature of things) overshot. More and more young people went to college, more of them majored in science, and more of them wanted to be scientists. But as the supply caught up to the demand, supply-side (labor) competition emerged, not surprisingly. Ordinarily this would just result in a decline in wage, but science is unique, in that the salary of the scientist is actually not the biggest component of the labor cost. It's more the cost of the grants he controls, and that he occupies one of a relatively few positions at a prestigious university, and neither of these things can readily change. So in this strange non-free labor market, the result of competition was supply rationing: a greater number of would-be scientists competed for a relatively unchanging number of faculty positions at prestigious universities, and for a fairly slow-growing number of research grants.

Once that competition becomes established, and fierce, prestige starts to matter more and more. People start scrutinizing the quality of the research you produce, and making judgments, and the prestige of the journal is just one more factor used to assess those things.

The timing fits: if the demand accelerated circa 1950, and it took 10-15 years for the supply pipeline to start delivering more, the supply would have started surging in the 1960-65 timeframe, and by 1970-75 competition would start becoming a big factor in the scientific career.

--------------

[1] Parenthetically, it's interesting that the similar surge in visibility and impact of networked computing in the 90s through the present has come mostly from private sources, significantly undercutting the post-war proposition that government can usefully catalyze technological progress. I would not be surprised if there has been a slow decline in the prestige of traditional science for just that reason -- and this may have something to do with the fairly surprising level of contempt shown towards (particularly government-supported) science during the COVID pandemic. Yes, mistakes were made -- but in a previous age they would've been glossed over.


Yes this account makes sense to me. It seems like the prestige economy is a by-product of professionalisation. In the amateur age, experienced and good scientists would write fairly professional-looking papers; inexperienced and not-so-good scientists would write papers that looked a mess. If you saw a paper that looked professional, it was probably good. The problem was picking the good papers out of the messy hand-written heap, and finding the next genius or next generation of decent scientists.

Following professionalisation, many people were taught to write papers that look good, and the problem for the reader of science was reversed: among all of those well-formatted papers, only a small number had really high-quality content, and for anyone who wasn't part of the field, it was tough to tell which they were (also for people who were in the field, as replication crises have shown us).

Hence, prestige measures grow up as an imperfect response to the problem.


As I understand it, government-backed and private R&D have particular strengths and weaknesses which make them suited to different niches. Government funding is indispensable at the early stages, when there's no clear path to profitability yet, which is why pretty much every technology that the networked computing of the 90s through the present is based on was initially developed at the taxpayers' expense. However, as soon as the stuff becomes commercially viable, the lumbering state is quickly overtaken by the nimble entrepreneur.


That would be a lot more persuasive if the transistor, laser, automobile, airplane, telephone, electric light bulb, nylon, or integrated circuit, to pick a few revolutionary ideas out of the air, had been invented with government money.


These journals are much like universities. It's extremely difficult to build prestige with new journals/universities, meaning that those with entrenched prestige can do all kinds of stuff (e.g. Nature embracing anti-scientific woke ideology, the well-documented nonsense that takes place on campuses these days) and it doesn't matter, because nobody can possibly compete. This is a real problem (if you care about truth rather than ideology).


Everybody cares about both, of course. Nobody has truth as the only terminal value, so there are always compromises, and the stronger the ideology, the greater are the compromises that it enforces.


This review feels like the beginning of a much larger conversation I didn’t know I needed to know about. I don’t really want to read the book but I do want to read more by this writer.


I agree.


Something similar for me. Not too interested in Nature or Science per se, but I thoroughly enjoyed the writer's writing. Polished, thoughtful and very well-written.


> Leaving aside Cell, a more specialized biology journal that seems to have gotten into the CNS acronym the same way Netflix got into the FAANG acronym

I don't understand this. It sounds like it's supposed to be a slam against Cell, but the way Netflix got into the FAANG acronym was by paying comparable salaries to everyone else in there. The analogue in journals would be that publishing in Cell provides comparable prestige to publishing in Science. How is that a slam?


Are you sure that median salary is the main reason why Netflix is a component of the FAANG acronym?

Noah Smith recently claimed the following (https://noahpinion.substack.com/p/are-tech-workers-going-to-be-paid), without identifying which companies he is referring to:

> ...medium-big [tech] companies often actually pay more [than FAANG], to make up for the greater risk and lower pedigree of working there.

Are these well-paying medium-big tech companies missing from FAANG? Or is it that FAANG has more to do with a combination of size and glamour?

I have never thought about this before. The review has inspired me to wonder more about the sources of prestige.


I am sure with complete certainty that the reason Netflix is a component of the acronym is that they are known for their top-of-the-entire-profession salaries, yes.

Whether they actually pay top-of-the-profession salaries is a different question, but there's no mystery as to why they're listed that way.


Jim Cramer, who is credited with inventing the term, based the grouping of these stocks together on "technical analysis", i.e. what a chart of their stock market price looks like:

https://www.cnbc.com/id/100436754


It’s always seemed to me like the primary reason that Netflix was included in FAANG is that the acronym sounds very different without the N.


It would have been much more fun to say if it were GAFA.


It's seemed to me that Microsoft fits with the other four in that acronym, in being a tech giant that is involved in many of the same product categories. Netflix feels much more like Uber or Airbnb, as a dominant single-category company. Are Netflix salaries very different from Uber, Airbnb, and Microsoft?


I think the coining of the term coincided with the tail end of Microsoft's lost decade, as well as Netflix pushing the technical limits of how much data it could stream. As such, the former was off the radar and the latter was investing a lot in tech R&D.


My vague impression is that articles in Nature and Science are somewhat less trustworthy than in mid-tier scientific journals because Nature and Science like publishing articles that make Big News. But when you publish a lot of "Who Knew?" breakthrough articles, some of them will turn out to be kinda not true.


Do you think that has always been true for Nature and Science, or did it only become true after some year? If the latter, what year would you estimate?


Thanks for asking, but I couldn't say. I suspect I'm mostly parroting the opinion of a very well-informed scientist friend, but I can't remember who said it or when. Maybe the late AI pioneer John McCarthy? Somebody like that.

Columbia statistician Andrew Gelman might have an opinion on your question.


Thanks.

In the newspaper business, the ardent pursuit of hot scoops can lead to both more Pulitzers and more fiascos (and these days, both with the same story). Same in the scientific journal business.


I wonder whether it depends on the field. In social sciences and humanities this is absolutely true. But I suspect that in microbiology, these journals might actually be the place where the best papers are published.


Right. So don't listen to my impressions and assume they also apply to hard science topics. Those fields are over my head.


Also, Science and Nature publish a surprising amount of not very high quality opinion journalism on Woke topics by lightweight journalists. For example, from Nature:

"BOOKS AND ARTS

"23 July 2019

"Sports and IQ: the persistence of race ‘science’ in competition

"Angela Saini assesses a book examining how bad science lingers."

"... His section on the success of Kenyan marathon runners in global contests is brilliant: it demolishes the idea of genetic explanations for any region’s sporting achievements. Some have speculated that Kenyans might have, on average, longer, thinner legs than other people, or differences in heart and muscle function. Evans notes, however, that we don’t make such generalizations about white British athletes when they do disproportionately well in global athletics. Such claims for athletic prowess are lazy biological essentialism, heavily doped with racism."

https://www.nature.com/articles/d41586-019-02244-w?utm_source=twitter&utm_medium=social&utm_content=organic&utm_campaign=NGMT_2_JNC_Nature


In fifty years' time, people will be writing about the origins of The Astral Review's utter dominance of the book review journal industry.

If Scott plays his cards right.

I'm on to you, Scott.


This is a superb review: or perhaps I should say superb *essay*, since it mostly felt like an essay on the topic, using the book as the primary but definitely not the only source, rather than a review per se. (Of course, the "essay masquerading as a review" is itself a well-established type of review, including the ones Scott does, so that's hardly a criticism).

But this was extremely interesting. An early front-runner for the top slot, and one that will be hard to beat. Chapeau!


The first error that jumps out at me: Twitter doesn't belong in the discussion at all. It's a deep layer of hell, not any sort of useful information source.


I agree that Twitter is in many ways terrible, and that the institution of science would be better if its public square were happening somewhere else, but empirically Twitter is undeniably where a lot of public scientific discourse is happening. For instance, it's the primary channel by which criticisms of bad papers that fail to replicate go viral.

You might not like it, and I don't either, but descriptively, Twitter absolutely does belong in the discussion. Trying to describe the landscape of scientific discourse without mentioning Twitter would be willfully blind.


First of all, Twitter is not a "public square." No virtual space can be. A public square requires certain features that only physical reality can provide, and it's amusing to see so many people deceive themselves by believing that the public square is an accurate metaphor for what Twitter does.

Secondly, Twitter is a failure even for the communications you describe, because dissident opinions are censored and their authors banned. This makes those who take anything on Twitter seriously the willfully blind ones.


I'm not saying Twitter is GOOD for having these discussions. I'm saying, without normative judgment, that it's where these discussions are happening, whether we like it or not.

That means including it in a discussion of scientific discourse isn't an "error," it's an accurate description of the state of the world. NOT including it in this discussion would be an error.


You're trying very, very hard to not only join "ought" and "is", but superimpose "ought" OVER "is".

As C_B points out, Twitter is where these conversations are happening. You can argue they shouldn't be happening there. You can be angry that this is where they're happening (I personally think that Jack Dorsey should be subjected to lingchi on national television alongside Mark Zuckerberg and we should go back to the days of fora). But what you are arguing is "those science conversations that happen on Twitter undergo a kind of metaphysical alchemy that makes them not REAL science conversations", which is more of an argument I'd expect from a Platonist than a Rationalist.


In the vast, vast, *vast* majority of scientific communication, Twitter has no interest in censoring dissident opinions or banning their authors.

And I take many things on Twitter seriously, because in spite of my generally agreeing with you that it's not the right forum for this sort of thing, C_B is right that a great deal of scientific discourse does in fact take place there and nowhere else (or at least not until much later and in a more heavily edited/"censored" form). Serious people, who actually are scientists, post their hypotheses and preliminary results on Twitter, taking reasonable care to make sure that what they post is accurate, and discussing other scientists' work with the same attempt at rigor.

I really, really wish they would do this almost anywhere else, but Twitter is where they do it and the value of the work is not wholly extinguished by the deficiencies of Twitter.


I would agree, with the proviso that this applies to *public* discussion. In many fields email remains a crucial channel for private, pre-public discussion.


Twitter can easily be controlled so as to remove almost all of the most objectionable elements, for example by creating a list of the posters (and ONLY those posters) who interest you, and only viewing that list, not the front page, not the timeline.

The fact that so few people are willing to take ten minutes to do this, regardless of how much they complain, is, as they say, revealed preference.

If you find professionals you respect using Twitter without complaint, you'll probably find that's exactly how they do it -- see tweets from people they respect, don't see anything else.


Review-of-the-review: 8/10

This is a thoughtful, well-written review on a frankly disappointing subject. Reading between the lines, the reviewer has taken a not very interesting or topical book and injected it with as much ACX-bait as possible. Social media! Academic incentives! Status equilibria! Whatever happened in 1970! Huxley!

To be fair it's *good* ACX-bait; the comparison of Nature's early days to Twitter was thought-provoking, and the analysis of its later success in terms of network effects seems on-point. But there's only so much a book review can do without going entirely beyond the material of the book. In the end the review just didn't bring as much to the discussion as I wanted and so I have to rank it behind reviews of more interesting books.

But I still enjoyed reading it, of course. As always, many thanks for contributing!


I was going to say this but you said it better. It’s a great review of a possibly somewhat boring book.


"A fun puzzle from the social sciences: what happened in the early seventies?"

The year 1968 was widely recognized at the time as marking a momentous change in the culture, with many ensuing changes happening in 1969 (e.g., in the U.S., environmentalism, Women's Lib, Gay Lib, etc.).

Even seemingly irrelevant social arrangements changed rapidly: all-amateur Wimbledon opened itself to professional tennis players in 1968. Even in perhaps the most conservative aspect of American culture -- golf -- the tour golfers liberated themselves from the control of the PGA (which is dominated by teaching pros rather than touring pros) in late 1968. (That's why there is a PGA Championship, going on now, and a newer Tournament Players Championship.)

It was the Zeitgeist.


I liked this because I've been interested in prestige lately. It seems like it represents a funny quirk in how laws are made. Like, The People elect Representatives, who appoint Officers, who enforce the laws made by the Representatives or the guidelines made by the Officers. So far it's very democratic. But none of those people have much expertise in the subjects about which they have to make laws / guidelines. So they have to hire Experts. And how do they know which Experts to hire? Prestige.

If The People want different laws, we can switch Representatives, who can replace Officers. But we don't seem to have any way to get different Experts. The formal political process doesn't have a lever with which to affect prestige -- like, Congress can't dock Harvard 500 prestige points for messing something up. And so academic prestige almost sort of seems like an extra, secret branch of government, with tons of power to affect the laws, but no democratic mandate and little accountability. Which is probably good in some ways, and bad in others.

So the question of what prestige is and where it comes from seems really important.

Expand full comment

This seems to be an important insight. Prestige could perhaps qualify as a sixth estate. We currently have attention, prestige, fashion, transgressivity, social debt, and similar markers which are poorly reflected in mainstream thinking about society, with much of the discussion structured around markets and money.

Expand full comment

Yep. Good point, nicely made.

Expand full comment

I'd like to point out two factors that could contribute to the discussion, since the text proposes two "disruption points" in Nature's history. The first, based on publication speed, dates to roughly the same time that industrial printing processes were being established in the Northern Hemisphere. For example, by 1814 the London-based German engineer Friedrich Koenig had presented his steam-powered printing machine, an innovation that sped up the printing process by making it possible to print more than 400 pages per hour. Typesetting, however, was still a manual process and would only be automated around the 1890s with the Mergenthaler Linotype machine. Even so, newspapers and daily publications were already a thing, and I can see how that influenced scientific propagation.

The other point relates to the second disruption moment, in the seventies. Perhaps the peer-review process was a direct consequence of studies in the philosophy of science by Popper, Feyerabend and, more recently, Thomas Kuhn. What these fellas had in common was the idea of the non-dogmatism of science, an aspect that would push its affairs away from authoritarian forms of sharing knowledge. This intrusion of the humanities and social studies into the scientific field might have contributed to the rise of prestige-oriented publications (yet another form of authority, but one based on the network effect rather than on the name of the scientist).

P.S. Sorry for English mistakes, not my first language

Expand full comment

Regarding the importance of the prestige of Nature/Science, I'd say it seems field-specific to me. In my field (climate science), I don't perceive that having N/S publications is rated as highly as implied in this review - it will be seen as a big plus, but it's possible to have papers in "normal" journals that are regarded just as highly. N/S papers tend to fit a particular kind of work - one that fits within a certain length and ends with an exciting-sounding conclusion. Papers with a really nice theoretical discussion, say, may just not fit there but may actually be of higher value. There may also be a bit of an art to writing papers that N/S editors will like, one that many good scientists never learn, so their work ends up elsewhere even though it may be just as high in quality. If I quickly think of my favourite papers, the large majority are not in Nature or Science.

Error rates also seem to be quite high in Nature/Science. This might be expected as a result of trying to publish exciting, i.e. surprising, results. But there also seem to be very bad mistakes quite often, e.g. conflating correlation and causation, and there was a paper a while ago that basically made a big deal out of finding that 'A' is anticorrelated with 'B minus A' - a correction did actually get published for that one, though I don't think the paper was retracted!
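To see why that 'A' vs 'B minus A' comparison is a trap, here's a minimal simulation - made-up standard-normal data, nothing to do with the actual paper - showing that the shared A term manufactures a strong anticorrelation even when A and B are completely independent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two completely independent quantities.
A = rng.normal(size=100_000)
B = rng.normal(size=100_000)

# The shared A term forces a negative correlation by construction:
# Cov(A, B - A) = -Var(A), so corr = -1/sqrt(2) when Var(A) == Var(B).
r = np.corrcoef(A, B - A)[0, 1]
print(f"corr(A, B - A) = {r:.2f}")  # ~ -0.71, with zero real signal
```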

Some scientists in my field are even a bit sneery about Nature/Science - one prof calls them "tabloid journals"! So only ever publishing there (even if you could) may not look that great either.

Expand full comment

Take a look at the website of Cell: https://www.cell.com/

Not what I would call amazingly aesthetic.

Might this be an opening for someone trying to start a new scientific journal today? Better web design?

Expand full comment

Two cents from a theoretical physicist: I never read Nature. Actually, I never read any journal. But I read arXiv every day. I think in my subfield such behavior is the norm rather than the exception. But it does mean that I am relying much more on my own judgement (and that of the grapevine) than on the proxy signal of journal prestige.
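For what it's worth, the daily skim doesn't even need the website. A minimal sketch against arXiv's public Atom API (hep-th is just an example category, not necessarily mine):

```python
# List the ten most recently submitted papers in a category via the
# public arXiv API, which returns an Atom XML feed.
import urllib.request
import xml.etree.ElementTree as ET

URL = ("http://export.arxiv.org/api/query?"
       "search_query=cat:hep-th"
       "&sortBy=submittedDate&sortOrder=descending&max_results=10")

with urllib.request.urlopen(URL) as resp:
    root = ET.fromstring(resp.read())

ns = {"atom": "http://www.w3.org/2005/Atom"}
for entry in root.findall("atom:entry", ns):
    title = " ".join(entry.find("atom:title", ns).text.split())
    print(title)
```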

Expand full comment

Interestingly there seems to be something of a class system here, where the top dozen or so departments generally make a big show of *not* caring about journals and trusting their own judgement (and word of mouth), whereas the lower you go down the food chain the more the journals matter.

Expand full comment

FYI:

Wage growth didn't change in the 1970s. The CPI diverged from actual inflation due to the end of the gold standard and various changes in CPI calculations that turned it into more of a cost-of-living measure. The CPI is a really bad measure of inflation.

If you look at house sizes in the US, the median new home was 1,500 square feet in 1970. By the 2000s it was over 2,300 square feet.

This is true of literally everything. People got a ton of new things (microwaves, computers, cell phones, smartphones, internet, air conditioning (it existed previously but became ubiquitous), video games, smart devices, video playback devices (multiple generations! VHS -> DVD -> BluRay), CD players, surround sound systems, etc.), more things (twice as many TVs per household, more cars, etc.), as well as massively higher quality things (TVs, cars, better insulation, better windows, home appliances).

Obviously all of this directly contradicts the notion of wage stagnation... because wage stagnation never happened. Wages continued to increase in real terms.
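To make the deflator point concrete, here's a toy calculation with entirely hypothetical growth rates (not actual BLS numbers): the same nominal wage series reads as "stagnation" or as healthy growth depending purely on which index you divide by.

```python
# Hypothetical rates, for illustration only.
years = 30
nominal_growth = 1.045  # assumed 4.5%/yr nominal wage growth
cpi = 1.045             # if the CPI also rises 4.5%/yr -> "stagnation"
alt_deflator = 1.030    # an alternative index rising 3.0%/yr

real_cpi = (nominal_growth / cpi) ** years
real_alt = (nominal_growth / alt_deflator) ** years

print(f"Real wage multiple, CPI-deflated:       {real_cpi:.2f}x")  # 1.00x
print(f"Real wage multiple, alt-index-deflated: {real_alt:.2f}x")  # ~1.54x
```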

Expand full comment

(Disclaimer: I have a Nature cover credit to my name. Check out 23 Jan 1992.)

Like most mysteries, the answer being sought by the reviewer is hidden in plain sight, but it's an answer most people will not like...

What happened in the 70s was that, in reaction to the claims/demands of the 60s, science was "democratized". Like most extreme changes, this was a slow burn (Popper is 1934, Kuhn is 1962), but by the end of the 60s we had the following all lined up:

(a) many many more people enrolling in the sciences (and college generally)

(b) many, many more institutions serving these students (each with their own profs wanting to be published)

(c) various versions of the claim that science was more or less a consensus delusion rather than actual truth (along with the other dead white male type claims)

So we have two problems.

The obvious problem is point (b): there's a lot of stuff being published, and lots of it is garbage.

The less obvious problem is (c). How do they interact?

Well, the truth is that science is an aristocratic endeavor. Not an aristocracy of birth, but an aristocracy of taste. To advance science requires a leap into the dark, and that leap is valuable (ie gets us closer to a better understanding) when the maker of the leap sees some non-obvious pattern in the world. Many can read scientific material, a few can understand it, and a very few can see beneath it to patterns that have not yet been noticed.

But it did not fit the times in the 60s (and has not fitted the times since then) to admit this.

Instead we have seen the rise of cargo cult "scientific methods". These are all the nonsense you are well aware of: statistical tests undertaken with no conceptual basis, just pattern-fishing, by people who have no clue about the mathematics of measure theory; claims about how experiments are supposed to be done; and yes, peer review. Peer review is exactly what the superficial scientific democrat wants: it's a "method" so it's open to anyone, and it doesn't privilege being dead or being white or being male. And who could complain about having three experts (experts? well, they have credentials don't they?) look over a paper and make sure that it doesn't include any unjustifiable crazy leaps of faith. I mean, once you allow people to publish whatever random nonsense they simply intuit, you get crazy ideas like relativity or wave-particle duality, and who wants nonsense like that?

So that gets us up to today: a massive self-licking ice-cream cone of "scientists" who keep each other "credentialed" by publishing garbage (which may not actually be wrong, though it frequently is) that is pretty much utterly useless for any and every task. And this persists because, in the name of "democracy", we cannot admit the truth that the number of people who should actually be scientists, and who can make any useful contribution, is vastly smaller than the population who, for whatever reason, want to do something science-adjacent.

But there are still a few real scientists out there. And they still need a way to communicate the actually important results to the other actual scientists, especially without being tripped up by moron reviewers who simply cannot comprehend that science is about going beyond the textbook, not about reproducing it.

Which is what makes Nature and Science and Cell (and a few other special publications like PRL) so valuable. They are where the real scientists hang out and interact with each other, trying very hard to stay off the radar of the mob and its eternal desire to drag down to its own level anything sublime and superior.

Nature and Science and Cell have prestige because

- prestige (ie something aristocratic that behaves in an aristocratic manner, and is not ashamed of it) is necessary for science to function, and

- they (perhaps by luck, perhaps by seeing what was happening around them in the 60s and 70s) were willing to take on the responsibility of prestige (eg institute peer review if that's what is demanded, but don't let that actually kill what's valuable)

There were (and remain) many other siren calls out there, easier and more popular than prestige. You can be an "inclusive" journal. You can be a money-making journal. You can be a politically powerful journal. But go down any of those paths and you will lose prestige. Prestige comes from the aristocrats appreciating that you are performing your aristocratic role appropriately.

Expand full comment

When is it that university administration shifted from being done by faculty to being done by full-time bureaucrats? That would explain the sudden shift to wanting a 'measurable' "impact factor" for evaluating a scientist's publication history.

Expand full comment

As you mention, Nature made its name over the years in large part through publications by well-respected scientists - a model Substack is pursuing by poaching highly read authors. It gave the paper credibility and high readership. There is also the business side, where Nature simply outlived its competitors.

Expand full comment