[This is the sixth of many finalists in the book review contest. It’s not by me - it’s by an ACX reader who will remain anonymous until after voting is done, to prevent their identity from influencing your decisions. I’ll be posting about two of these a week for several months. When you’ve read all of them, I’ll ask you to vote for your favorite, so remember which ones you liked. If you like reading these reviews, check out point 3 here for a way you can help move the contest forward by reading lots more of them - SA]
If you enter a major research library in the US today and request a century-old issue of a major American newspaper, such as the Chicago Tribune or The Wall Street Journal, or of a major-but-defunct paper such as the New York “World,” odds are that you will be directed to a computer or a microfilm reader. There, you’ll get to see black-and-white images of the desired issue, with individual issues often missing from the filmed run and much of the text, let alone the pictures, barely decipherable.
Most of the libraries in question once held bound volumes of these newspapers, but between the 1950s and the 1990s, one after another, they ditched the originals in favor of expensive microfilmed copies of inferior quality. They continued doing this even as the originals became perilously rare; the newspapers themselves were mostly trashed, or occasionally sold to dealers who cut them up and dispersed them. As a consequence, many of these publications are now rarer than the Gutenberg Bible, and some 19th- and 20th-century newspapers have ceased to exist in physical form anywhere in the world.
When Double Fold by Nicholson Baker came out in 2001, it was described as The Jungle of the American library system. Twenty years on, the book remains universally known among librarians, sometimes admired but often despised. The reason for their hostility is that Baker publicly revealed a decades-long policy of destroying primary materials from the 19th and 20th centuries, based on the pseudoscientific notion that books printed on wood-pulp paper are rapidly turning to dust, coupled with a misguided futuristic desire to do away with outdated paper-based media. As a consequence, perfectly well-preserved books with centuries of life still ahead of them were hastily replaced with an inferior medium which has, at the moment I am writing this review, already mostly gone the way of the dodo. Despite its notoriety among librarians, however, Double Fold is little known among the general public, even compared to Baker’s other non-fiction and his novels.
This is a shame, since the mass destruction of books and newspapers by libraries in the post-war era deserves to be better known as one of the most egregious failures of High Modernism, comparable with the wackiest plans of Le Corbusier. The story combines an excessive reliance on simplistic mathematical models, willful indifference to the desires of actual library users and scholars, the embrace of miniaturization and modernization as terminal values, and an almost complete disregard of 19th-century books as historical artifacts. Unlike industrial farms, which can be broken up, and Brasília-style skyscrapers, which can be torn down and replaced with something else, the losses caused by the mass deaccessioning of books and newspapers from libraries were often irreparable.
As part of the uproar that followed the book’s publication, the Association of Research Libraries published an online anti-Baker FAQ, and in 2002 the book “Vandals in the Stacks?” by Richard J. Cox came out, presenting an attempted refutation of Baker’s theses. I have read both of these and discuss Cox’s arguments later on, but I must admit in advance that I found Baker’s argumentation much more convincing than that of his opponents. Nonetheless, it is uncommon for a polemical book to receive a book-length response, and anyone interested in Baker’s thesis is advised to check out Cox as well.1
***
Few ACX readers are likely to need an explanation of why 19th- and 20th-century history is interesting, or why it is important. Most major economic, political, or technological decisions in the present hinge to some extent on our understanding of modern history, the interpretation of which depends on the work of historians. A historian will always need to work with sources of some kind, which may be archaeological or epigraphic, but for recent centuries, they will mostly be written sources on paper – manuscripts, books, pamphlets, and newspapers.
To simplify, a historian might approach these sources in two different ways. For some projects, she might consult a small number of predetermined sources – e.g. when analyzing the US response to the assassination of Franz Ferdinand, she would check what the major newspapers wrote at the end of June 1914. In other cases, it is necessary to comb through a large body of sources without knowing in advance which ones will be useful – e.g. if our historian analyzed attitudes towards violence in the US in the years just before WWI, she would need to check a large number of books, newspapers, and ephemera from the time. For such a systematic search, even trying out a bunch of keywords in an electronic database might be inferior to simply leafing through the volumes themselves in a large research library.
Even though it’s possible to make educated guesses about how valuable a certain publication might be to future historians – e.g. Barack Obama’s memoirs are more important than a Walmart catalogue – it’s very hard to predict what will be used by a future researcher and what won’t. The best guess is simply that anything might be valuable at some point. During the last two centuries, historians have broadened their interest from an almost exclusive focus on political and military history to things like social and economic history or the history of women and people of color. To a historian in 1850, information about the historical prices of bread, the use of cutlery, or the travel speeds of different kinds of merchant vessels might have seemed like footnotes to real history at best. A century later, Fernand Braudel used nothing but information of this sort to weave together a groundbreaking new history of medieval and early modern Europe. It is reasonable to assume that 21st- and 22nd-century historians will keep expanding their field in similar ways.
Perhaps the best illustration of the constantly expanding area of historical interest is the huge trove of medieval Jewish documents that was discovered in the Cairo geniza in the 19th century and mostly brought to Europe. In the beginning, only Biblical texts attracted much interest, especially ones that had been presumed lost in the original Hebrew. Later, during the 20th century, researchers suddenly became excited by the documents left behind by medieval Jewish intellectuals – previously unknown manuscripts of works by Maimonides and Judah Halevi, some of them written in their own hand. A generation or two later, new scholars looked into the everyday documents left behind in the geniza – receipts, contracts, and IOUs – and used them to construct a social history of the medieval Jews of Egypt. As Adina Hoffman and Peter Cole write in their book Sacred Trash, there is only one constant in the research into the contents of the Cairo geniza: whatever one generation set aside as boring and irrelevant became the cornerstone of the next generation’s scholarly pursuits.
Microfilm
The story of Double Fold might be said to begin in the 1930s with the advent of microfilming. The idea of photographing documents to make them more portable had been around at least since the 1870s, but it took 60 more years until microfilm technology was sufficiently advanced to become attractive for libraries. The basic idea was simple: you took pictures of every page of a book, put them together into a roll of film stored in a small box, and when someone wanted to “read” the book, they put the film into a large TV-like device that magnified the image onto a screen, with a pair of buttons that you could use to navigate left and right.
Baker claims that microfilm got a big boost during WWII, when it was often used by spies to hide documents, and by the US government back home to disseminate military information. This allure continued into the Cold War years, and it helped that many of the librarians keenest on microfilm were ex-military men who wanted to apply what they had learned in the Army to their civilian jobs. Microfilms were small and felt modern, but unfortunately, many of the advantages they offered the military were not advantages for libraries as well. Baker quotes Vernon D. Tate, an Army microfilm specialist who went on to become chief librarian at MIT:
Books may not be blown to bits or easily consumed by fire; microfilms if capture is inevitable can be rapidly and completely consumed, and as easily replaced through the making of prints from master negatives.
Apart from being flammable, microfilms also had several more commonly encountered disadvantages. Baker describes reading them as a “brain-poaching, gorge-lifting trial,” especially when the images had a poor resolution.
You feel as if you’re mowing an endless monochromatic lawn, sliding the film gate this way and that, fiddling with the image rotation dial and the twitchily restive motor switch. If you have a date and a page number, you look that one citation up and leave; you’re rarely tempted to spend several hours in the daily contextual marsh. ‘Certainly the patron’s desire to browse through back issues of newspapers is almost completely gone – people rarely browse through microfilm’: so wrote E. E. Duncan in Microform Review in 1973.
Not every library attached flight-sickness bags to its microfilm readers, as one Canadian library mentioned by Baker did, but it is telling that microfilm readers never became popular outside of libraries and government institutions, despite being in use for over half a century. Baker mentions one scientific journal that was published only on microfilm, which is actually one more than I would have expected; I’m unaware of any book ever published exclusively on microfilm.
Rebecca Rego Barry was one of the researchers who benefited from a treasure trove of newspapers that Baker saved from dispersal immediately before Double Fold was published. She used them to sift through a decade’s worth of the Herald Tribune, searching for articles written by a columnist whom she was analyzing for her thesis. “Could the articles be found on microfilm? Theoretically they could, with another year and an extra set of eyes, if whoever had microfilmed it had done a decent job in the first place.”
The “decent job” part turns out to be really important. Because you need a machine to read them, microfilms are hard to casually inspect for quality, which earned them the nickname “the invisible product.” Baker enjoys listing examples of lazy operators skipping pages and producing incomplete films, but the really big issue is technical. If you aren’t very careful when developing the microfilm, “residual hypo” – image-processing chemicals that weren’t rinsed away during processing – will damage the film and blur the text, often beyond the point of legibility. Put all this together, and you arrive at the 50% of all incoming microfilms that the Library of Congress rejected in the mid-1970s. The kicker? Over half of these rejected microfilms weren’t returned to the vendor, but were accepted into the Library’s collection despite their faults – such was the hurry to modernize.
Lastly, microfilms themselves don’t age very well. Just as with paper, there are different kinds of plastic used for microfilm (as well as for microfiche, a lower-resolution flat-sheet version of microfilm, and for similar-but-abandoned technologies such as Microcards), and Baker lists the ways in which each of them is sensitive to damage. The main form of damage is fading due to prolonged light exposure, but even worse is what can happen if all that focused light on a small strip of film raises the temperature too much, which can basically blot out the film. All of this sometimes leads to ironic consequences, as when Baker tried to consult the papers of Verner Clapp, the number-two person at the Library of Congress during the 1950s and one of the most passionate supporters of microfilm.
All Clapp’s notes are on paper, easily read today. Clapp’s CIA file, on the other hand, is an unfortunate victim of the Cold War mania for micro-preservation: it looks to have been inexpertly filmed at some point, and it has undergone a severe fading, as microfilm does when technicians don’t take care to rinse off the hypo fixative. The copy that the CIA sent me is poignantly stamped with the words BEST COPY AVAILABLE on almost every undecipherable page. Some of these pages are, though uncensored, completely unreadable.
Of course, it would be easy for none of this to matter at all in 2021. Despite its downsides, microfilm had the major advantage that it could be copied at will, which suddenly made a bunch of rare items accessible to libraries all over the country. Baker often stresses that he has nothing against the technology as such, so long as it is used merely to supplement paper collections. That, however, is not what happened. Instead, microfilm became part of a plan to get rid of paper almost entirely.
Brittle paper
The second key piece of this jigsaw is paper deterioration. Paper from the 18th century and earlier usually ages quite well, the reason being that it was produced from rags, i.e. old clothes and other discarded textiles. The upside of rag paper is that it was made from 100% recycled material; the obvious downside is that there is a limited supply of old rags in the world. Around 1850, this led to the introduction of wood-pulp paper. Wood is plentiful, but turning it into paper usually required procedures that left the final product slightly acidic, and the acids slowly damage the cellulose fibers of which paper consists. This is why paper made after 1850 often yellows over time and is much more brittle than either ancient or modern rag paper.
Before reading Baker’s book, I had heard the story about the inevitable slow decay of wood-based paper a bunch of times, and it was almost always told as a categorical truth: wood-based paper is trash, it will literally fall apart sooner or later, and the only way to really preserve it is through semi-experimental treatments that remove the acids from the paper. I usually scratched my head at this, since I know from my own collection that there are lots of different kinds of paper. There are plenty of 100-year-old books on wood-pulp paper that look brand-new; or the paper is slightly yellowed at the edges but otherwise fine; or the paper has gone entirely yellow and is obviously brittle, but as long as you treat the book well, it isn’t going to fall apart, and you can read it a number of times without any major damage. I always assumed I was somehow affected by survivorship bias, and didn’t give the matter much more thought.
It wasn’t until I read Double Fold that I got an answer to this conundrum. Yes, Baker concedes, paper does go brittle over time, but the reaction proceeds much more slowly without oxygen and light, which means that a closed book on a shelf ages at a negligible rate (loose sheets of paper exposed to the air, however, will quickly turn yellow). Also, once the chemicals on the surface of the paper have reacted with the air, the overall reaction slows down, and the book ages more slowly, not more quickly, as time goes on. Most importantly, paper can be brittle in the sense that it tears easily, or falls apart when crumpled, but this isn’t relevant to the way books are used in a research library. As long as you use a 19th-century wood-pulp book as you’re supposed to (that is to say, just as carefully as you would consult a 19th-century rag-paper book), it will survive without much trouble. There’s no reason why a somewhat brittle, yellowish book couldn’t still be on the shelves a century from now.
If all this is true, how did we come to believe that wood-pulp paper is terminally endangered and turning to dust? Baker’s answer: bad science. Most of what we know about the long-term fate of paper comes from studies of accelerated aging, in which researchers treated paper at high temperatures (i.e. baked it in an oven) until it broke down completely, and then used the Arrhenius equation or its derivations to extrapolate how long the same process would take at room temperature. Of course, this is just a model, and it has the substantial downside of never having been tested against reality: as Baker points out (and Cox nowhere objects to this in his refutation), no study had ever been performed over a longer period of time to demonstrate how paper actually ages naturally, and how much strength it loses over decades in the library rather than minutes in the oven.
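For readers who haven’t met it: the Arrhenius equation says that the rate constant k of a chemical reaction grows exponentially with temperature T, governed by an activation energy Ea. The accelerated-aging extrapolation runs roughly as follows (a simplified sketch on my part; the numbers are purely illustrative, not taken from the studies Baker discusses):

$$k = A\,e^{-E_a/RT}, \qquad \frac{t_{\text{room}}}{t_{\text{oven}}} = \frac{k_{\text{oven}}}{k_{\text{room}}} = \exp\!\left[\frac{E_a}{R}\left(\frac{1}{T_{\text{room}}} - \frac{1}{T_{\text{oven}}}\right)\right]$$

With an assumed activation energy of around 100 kJ/mol, three days of baking at 100 °C would extrapolate to roughly fifty years of shelf life at 20 °C – the kind of arithmetic behind the doomsday estimates, resting entirely on the assumption that oven chemistry is shelf chemistry, only faster.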
Accelerated aging tests are difficult to perform on each book individually, so in order to quantify the fragility of their books, librarians came up with a much simpler test – the “double fold” test from which Baker’s book takes its title. To do a double fold test, you take the corner of a page, fold it, press down the fold, unfold the paper, and fold it again to the other side. You keep doing this until the paper snaps. For each pair of folds it endures, the page earns one unit of double fold value (dfv): e.g. if it breaks after the first fold, it has a dfv of 0.5. Each library sets its own threshold for how few folds a book may endure before it officially counts as brittle, but the implication of the fold test is always the same: a book with a low fold value is at the end of its lifespan, and the only thing we can do for it is provide some sort of palliative care, if not euthanasia.
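To make the scoring concrete, here is a minimal sketch of the bookkeeping in code (my own illustration – the threshold value is hypothetical, since each library set its own):

```python
def double_fold_value(folds_endured: int) -> float:
    """Each pair of folds (one to each side) counts as one unit,
    so a corner that endures a single fold before snapping scores 0.5."""
    return folds_endured / 2

BRITTLENESS_THRESHOLD = 2.0  # hypothetical cutoff; real thresholds varied by library

for folds in (1, 3, 8):
    dfv = double_fold_value(folds)
    verdict = "officially brittle" if dfv < BRITTLENESS_THRESHOLD else "passes"
    print(f"{folds} folds endured -> dfv {dfv}: {verdict}")
```

Note that nothing in this score measures what actually matters in a library – whether a page survives being turned.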
Baker will have none of this. He concedes that the fold test captures some aspect of paper quality, but argues that it has little relevance to the expected lifespan of books, or to the number of uses they can endure before some sort of catastrophic collapse. Instead, Baker proposes, half seriously and half in jest, a new means of testing the durability of books: “the Turn Endurance Test.” You take a book, open it in the middle, and flip the page, as you would when reading. Then you flip it back. Baker applies both tests to a book from 1893 which he happens to be reading at the moment. The double fold test produces a value below 0.5 – a death sentence in most libraries. The Turn Endurance Test, however, shows that the same book can endure hundreds of turns of a single page without any kind of damage.
That’s not how the librarians saw it, though. Baker chronicles how the rhetoric about brittle paper escalated during the 1970s and 1980s. At first, brittle paper was endangering the long-term survival of modern books. Then it was an immediate threat to their survival. Then the books weren’t just falling apart anymore: they were literally turning into dust. By the late 1980s, the catastrophic rhetoric had reached its apex: “10 million books in major American libraries will not survive this century,” one source claimed in 1988; “more than a quarter of books in libraries will not survive this century,” went another in 1990, ten years before the century’s end. Needless to say, they did survive – or rather, would have.
As long as the books were merely described as brittle and fragile, one might still propose to save them through the traditional means: restricting access, careful handling, and conservation, combined with non-destructive imaging to reduce the number of researchers who needed to consult the originals. However, if these books were literally on their deathbed, about to vanish into thin air no matter what we did for them, then…well…there was no reason to do anything more for them. We might as well chuck them out.
Shelf Space and Book Destruction
The 1988 film Slow Fires, which turned its director Terry Sanders into a household name in American libraries, was one of the cleverest pieces of anti-paper propaganda ever made, and Baker devotes considerable attention to it. The movie starts slowly, with scenes of crumbling marble inscriptions and papyri, accompanied by sorrowful music, followed by clips from interviews with famous scholars, all of whom emphasize how much they value working with primary sources. In the following scene, we are led through the Florence library in the aftermath of the Arno’s destructive 1966 flood, and through the ruins of a nameless burnt-out library, accompanied by more of the same solemn music. A sensitive viewer might shed a tear at these scenes; it seems obvious that this is a movie about the value of preserving our cultural heritage and the importance of historical artifacts.
In the scene that follows, we enter the preservation department of a major library, where the microfilming of a rare bound newspaper from the 1920s is just underway. A worker explains the microfilming process to us, while she slowly slashes the volume’s binding and proceeds to cut the individual pages apart and feed them into the filming device.
Wait, what?
The process in question is called guillotining a book, and according to Baker, it was the logical outcome of the paper brittleness myth, combined with the passion for microfilming. What made these two deadly was a secret ingredient – the desire to free up shelf space. There were few librarians in history who did not at some point complain about the lack of space. However, this particular problem always had two different solutions: either increase space, or reduce the number of books. For large research libraries, the first option was always the default one, since it was obvious that with the growth of human knowledge, the number of books necessary for future researchers would grow as well.
All of this changed after WWII. In a wave of futurist ideology that swept across US libraries, it suddenly wasn’t desirable anymore to keep expanding and piling up paper. Just as computer manufacturers kept trying to shrink their machines, a good modern library was suddenly one that kept miniaturizing. If not literally getting smaller over time, the library of the future should at least try to keep its size constant, no matter how large the influx of new publications. Of course, this meant that even in the largest US libraries, there would be less and less room for paper publications.
Baker quotes Fremont Rider, a poet-cum-businessman-cum-librarian who pioneered Microcards (an unsuccessful competitor of microfilm) and whose work had an immense influence on later Librarians of Congress. A library which has outgrown its building could simply buy another building, wrote Rider, but alas, increasing storage space is just “a tacit confession of past failure” – hence, librarians should feel ashamed of themselves for relying on such low-tech solutions. He then introduced the concept of the Microcard, and stated that, with this technology, “for the first time in over two thousand years, libraries were being offered a chance to begin again.” Such a technological shift would produce savings in storage costs that “came gratifyingly close to 100%” – assuming we got rid of all the books, of course.
It didn’t require a huge leap of logic, then, for Rider to propose that Microcards be made by cutting up the books in question before filming them, since there would be no need for the books afterwards. Baker follows Rider’s intellectual genealogy through Verner Clapp at the Library of Congress, who wrote a eulogy to Rider in a 1964 library science textbook, and through the network of Clapp’s own disciples. One of Clapp’s protégés, John H. Ottemiller, wrote pointedly in the 1960s that the library of the future has a “need for putting greater emphasis on the discarding of materials rather than their storage.”
Of course, microfilming a book isn’t free, and microfilming an entire library can be much more expensive than just storing it somewhere. After a major cost-benefit analysis that disfavored microfilm came out in 1957, Clapp responded by having the Library of Congress commission its own study in 1961. The conclusion he got was that, assuming a library could sell enough copies of its microfilm, the process would pay for itself – but only if it was sped up by cutting up the books and filming them page by page. Consequently, microfilming could be performed without any downsides – none, that is, “except the destruction of the text”.
Thus arose the ominously named practice of “preservation by destruction” (a phrase actually used by its proponents, not my invention or Baker’s). Baker likes to point out the Orwellian way in which modern-day book destroyers hijacked the very language of book salvaging. The microfilm departments in libraries were named “Preservation Departments,” in the vein of “Ministry of Peace” and “Ministry of Love.” The public was mostly unaware that the primary task of a Preservation Department was to cut up books and trash them afterwards. Inside the library, tensions often arose between the people working in conservation departments, whose job was to carefully restore old books, and those in “preservation” departments, whose job was to destroy them. Baker speaks with an employee of a book conservation department, who recalls that the microfilmers were often referred to unflatteringly as “thugs” – in return, the book restorers got themselves the nickname “pansies.”
Once the system was in place, it fed on itself. The logic was as follows: a library that bought a microfilm imaging device needed to use it as much as possible, in order to recoup the costs. Part of the profits came from sales of microfilm to other libraries, but a more certain profit came from the reduction in storage costs. And if the books were going to be discarded anyway, it was hard to resist cutting them up to reduce filming costs even more. If everyone involved believed that the books were terminally brittle, there was no need to feel bad about any of this – the books were on their deathbed, and if they had only one use left in them before they spontaneously disintegrated, then that last use had better happen in the microfilming department.
How did Slow Fires get away with showing the dismemberment of rare items to the public? By pretending that nobody wants to be doing any of this. “Nobody likes microfilm,” says one of the scholars interviewed by the crew. In another shot, the historian Barbara Tuchman explains how she did the research for one of her books by combing through old microfilms – she would have much preferred working with paper originals, but given that she only had microfilm on offer, she accepted this as a fact of life and pulled through. Even the worker who is filmed cutting up the old newspapers indulges in a moment’s reflection. “It kind of bothers me to guillotine [a] newspaper collection, because I know the actual papers are not going to go back on the shelves,” she notes. The hesitation lasts no longer than a few moments, though: “but to contain the information on microfilm is the ideal way to preserve the newspapers.”
Of course, it wasn’t the ideal way. Baker’s frustrated attempts to get America’s chief librarians to explain their discarding policy read like an endless progression of motte and bailey. The motte is that terminally endangered books need to be microfilmed to preserve their intellectual content; the bailey is that libraries should ditch paper books and switch to microfilm in order to modernize and miniaturize. Baker notes that several newspapers, such as The New York Times, produced a few special durable rag-paper copies of every edition, specifically for libraries. All for nothing: the libraries ended up ditching these volumes nonetheless. Patricia Battin, president of the national Commission on Preservation (!) and Access, was one of the most ardent supporters of microfilm:
’Yes, I’m sure that there are books that were microfilmed that probably were not that brittle,’ Battin says now. ‘We had great debates among the populace as to whether you took the collection approach or the individual-copy approach, and decided for the initial filming grants that the collection approach made the most sense.’ To me she quoted the French adage: ‘The best is the enemy of the good.’ Of course, the bad can be the enemy of the good, too.
What did we lose?
Baker spends a considerable amount of time proving that microfilming was a losing proposition in the financial sense. He’s probably right, but few people care about financial malpractice in libraries enough to read 300 pages about it. Instead, 20 years after its publication, the value of Double Fold hinges entirely on the value of historical material that was lost from US libraries during the microfilm craze, and that is difficult or impossible to replace. So, what did we lose?
1) Even though microfilm was almost exclusively a black-and-white technique, a lot of the material discarded in favor of microfilmed copies was in color. A major part of Baker’s book is the story of how he saved a large number of historic newspapers that had been put up for auction by the British Library and were, in many cases, the most complete print runs still in existence. Among them was the New York “World,” an illustrated turn-of-the-century newspaper which once had a readership of one million and which had catapulted Joseph Pulitzer to fame and fortune. Many of the issues Baker acquired were possibly the last in existence, and in Double Fold, Baker poignantly juxtaposes pictures of the original full-color illustrations with the same images in the microfilmed editions of the World (black-and-grey blobs, barely recognizable as illustrations).
Notably, Cox concedes in “Vandals in the Stacks?” that trashing these illustrated newspapers was a mistake and that librarians should have kept the originals. He also argues that discarding things is a necessary part of being a librarian, and that librarians are perfectly capable of judging what needs to be discarded and what doesn’t, without the interference of outsiders like Baker. He doesn’t seem to be aware of any contradiction here.
2) When libraries each have their own copies of a certain book or a newspaper, there is a high degree of redundancy involved. Major newspapers in particular would usually print several editions a day; each library would only end up receiving and storing one of these. More importantly, each library would randomly lack a few issues here and there, but you could probably find these in the next library if you needed them.
The whole point of microfilming, by contrast, was that only one library produces the microfilm and then sells copies to all the others, which can then happily discard their own print runs. Since Library of Congress regulations officially declared a microfilmed print run of a newspaper complete even if it was missing “a few” issues per month, plenty of officially sanctioned microfilmed print runs had holes in them. And if a certain issue wasn’t in the possession of whoever had done the microfilming, it would slowly disappear from the record entirely, as everyone else got rid of their bound volumes in favor of microfilm.
Interestingly, Cox’s book centers on a refutation of this single point. His main argument is that libraries can’t keep everything – even keeping a single copy of every historical US newspaper (or other publication) in some library or other in the USA would be so taxing as to be literally impossible. He doesn’t explain how libraries managed to find enough money to do exactly this up to the 1950s (despite the US being a much poorer country back then, with a much smaller percentage of GDP diverted to public services). In the end, he undermines his entire argument when he mentions in passing that working in Austrian libraries is relatively tedious because they hold so few items on microfilm. Indeed, at least in Europe, librarians seem to be managing the impossible task of storing a few copies of every historical publication quite well.
3) Obviously, an image does not in any way preserve the material aspects of the paper or the binding. If you’re researching the different kinds of paper used for newspaper production in the 19th and 20th centuries, you’re out of luck. Baker mentions two particularly annoying examples. The first was an 1830 newspaper edition which claimed to have been printed on an experimental run of wood-pulp paper, decades before such paper became common. Ironically, the newspaper in question was mentioned in a famous 1940s textbook on papermaking, but the author of the textbook was unable to do any chemical analyses, since the librarians jealously guarded the volumes and wouldn’t let him take samples. When Baker rang up the library in question in the 1990s, they told him that they had ditched the newspapers.
The second example is even more interesting. In the 1850s, the US on several occasions imported rags for paper production from Egypt, and several journalists at the time reported that the deliveries had consisted of mummy wrappings. At least one newspaper, the Syracuse Daily Standard, proclaimed to its readers that it was being printed on mummy paper. This could in principle be verified by molecular analysis, but unfortunately, almost all the libraries which had carried print runs of the Daily Standard have thrown them away. It’s possible that this helped us avoid the mummies’ curse, though in my opinion, getting recycled a second time made them even angrier. Maybe losing so much historical material was part of the curse.
4) Most notably, an old book or newspaper isn’t just a source of information, it’s also a historical artifact. A downside of Baker’s book is that he largely accepts the terms of the game as dictated by the librarians, and focuses on the informational value of the destroyed volumes. It’s not that libraries were completely oblivious to the inherent value of old books, but rather that they established a dichotomy: on one side, there was a small number of “rare” books with obvious historical value, such as inscribed first editions and Renaissance-era books, and on the other side, there was the mass of ordinary books, which were supposed to have value exclusively as vehicles for words and pictures.
Baker counters that this is the wrong way to look at books, since there is no clear line of demarcation anywhere: every book is, to an extent, both text and artifact. If nobody disputes that a pamphlet from 1700 should be preserved for its own sake, even when a perfect electronic copy is available, then the same should hold for a rare pamphlet, book, or newspaper edition from 1900. In fact, Baker’s problem is that he doesn’t have much material to argue against, since the great proponents of microfilm had mostly been so oblivious to this issue that they didn’t even bother mentioning it.
He does, however, manage to find a quote by Patricia Battin that could serve as the epitome of the High Modernist mindset in American libraries: “the value, in intellectual terms, of the proximity of the book to the user has never been satisfactorily established.” Everyone might have hated microfilm, everyone might have preferred working with the original historical artifact – but as long as the value of the artifact wasn’t satisfactorily established, there was no reason not to trash it.
***
By the time Baker was writing Double Fold, microfilm as an information medium was already on its way out, and most American newspapers and books had already been transferred to microfilm anyway, so it wouldn’t have made much sense for anyone to microfilm them again. Microfilming was quickly giving way to digitization, but it was fairly easy to produce digital copies from microfilm (rather than from the paper originals themselves). Why not let bygones be bygones, then, especially since Baker himself admitted that the destruction of books and newspapers had abated during the 1990s, thanks in part to the “abolitionist” campaign of a few scholars and librarians, led by Thomas Tanselle, a professor at Columbia?
Baker worried that unless we quickly learned something from the mistakes of the postwar decades, we were bound to make the same mistakes again, even more egregiously. It is possible to scan microfilms to produce digital editions of books and newspapers. However, because of all the problems outlined above – poor legibility, deterioration of film over time, missing pages, incomplete print runs – we often prefer to go back to the original source. The librarians who lobbied for their collections to be microfilmed loved to emphasize that this was a lasting solution, but a mere couple of decades later, Baker notes, we may have to do everything all over again. The difference is that in the postwar decades, there were still plenty of historical books and newspapers around to cut up and microfilm, whereas by the time Baker was writing his book, many of these publications survived only in a single copy, or had disappeared in printed form entirely. Guillotining books is unnecessary for acquiring a good image, but it was already unnecessary in the 1950s through the 1980s, and that didn’t stop librarians from practicing it. Baker worried that if we guillotined newspapers and books again during digitization, we would be destroying the last few survivors of the post-war carnage.
Even more importantly, for every book that librarians guillotined during microfilming, several other copies of the same book were ditched by other libraries around the country once they had bought the microfilm produced by the first library. In some cases, these books were sold and thus preserved by collectors (although bound newspapers, even when sold, were usually cut up and resold piecemeal by the buyers, ending up dispersed beyond anyone’s ability to ever reassemble a full print run).
Many other books, however, were simply trashed. Thanks to a combination of bizarre rules, bureaucratic stubbornness, fear of publicity, and simple inertia, it’s apparently very rare for American libraries to simply donate discarded books to the public. Sometimes the books are sold, but usually they are thrown into the dumpster, regardless of their value. Baker mentions the case of a researcher who tried to take home a copy of a rare book after it had been guillotined and filmed by the Library of Congress; she was told that this was against the rules, and the book was trashed. On the antiquarian book market, copies of the same book are worth around $2,000. Judging by what librarians themselves write online, the dumpster has remained the default option for getting rid of discarded books to the present day.
Twenty years after the publication of Double Fold, the frequency with which library books are guillotined for imaging is probably lower than it was in Baker’s time, or at any point since WWII, mainly because there are relatively few books left that haven’t yet been imaged by someone. It’s generally cheaper to pay for someone else’s scans than to do the scanning yourself. However, the very ubiquity of online resources also gives libraries an incentive to continue purging their collections and trashing the unwanted material. There are plenty of reports of major libraries trashing their books, though the public seldom learns which books were trashed, or how valuable they might have been.
In this sense, all of Baker’s warnings – about the losses we face when we discard a variety of paper editions of the same publication and replace them with a single digital copy – remain entirely current. The only difference is that because libraries nowadays contain so much material printed, from the late 1980s onward, on acid-free paper, the brittleness of paper is less useful as an excuse for large-scale deaccessioning. Instead, the main excuses nowadays are lack of space, the existence of digital copies, and the claim that nobody will ever need these books again anyway. Double Fold provides plenty of reasons why these books and newspapers will continue to be sought after, and why the copies will never be perfect substitutes for the originals.
1 Unfortunately, only part of “Vandals in the Stacks?” is actually spent refuting Baker’s arguments. Instead, Cox goes off on a number of tangents, including a long refutation of an unrelated essay by Baker from 1994, several complaints about Baker not discussing archives and archivists in Double Fold (Cox is an archivist by profession), and an entire chapter of Cox’s own professional autobiography, whose relevance to the topic of the book is never explained.