I was slightly disappointed the review of the Divine Comedy didn't make it to the final, but I was very pleasantly surprised that the Ballad of the White Horse did, so on balance I'm happy!
Though put me in the small pile agreeing that the favourites over on the prediction markets are not books I voted for. Well, such is democracy!
I don't remember the rant, but I do remember thinking while reading the review "You still haven't told me why I should care to learn about this guy". Okay, that's the job of the book, but the review was just - guy has ups and downs in Japanese politics/civil service, hops aboard the Westernisation train as a means of job security, ends up pro-modernisation.
And so? was my reaction. The review didn't make me interested in the man or the situation, and I mean, the Meiji period is *fascinating* in how it set the course for Japan that ended up with them deciding the Second World War was an amazing concept they wanted in on. But the review actually *bored* me, because I didn't care about the guy or what he was doing.
If we had votes for "this review made me want *not* to read the book", then that would have been my Number One.
That one had the problem that the reviewer was a little too uncritically receptive of a 19th century work, but the major problem was the example they gave about the Swiss village undergoing a possession craze.
Of course we nitpickers jumped all over that, and the true facts were vastly different from the glib, 19th century Enlightenment/Protestant rhetoric of "dumb backwards peasants and their superstitions". That really blacked their eye in terms of 'how trustworthy is this review?', because the facts contradicted the scenario as they presented it: the kindly big city doctor bringing the light of Science to dissipate the darkness of Superstition the benighted were wallowing in, and treating the mentally ill with kindness and niceness, was in fact not that kindly, and had no problem using police and soldiers to lock people up in the local hospitals until they were 'cured', for instance.
If they had it to do again, I would suggest leaving out the Swiss exemplar, which in fact was not an example of the point they were trying to make, and concentrating on the book and the author, with a bit more background on "why did the Victorians go all-in on Science! and Progress! as their guiding stars?"
Fortunately this year I thought ahead and put a like on the ones I knew I'd probably vote for, so I didn't actually have to remember anything when voting time came.
I hadn't read the preamble and was super happy when I realized that it is approval voting. I found it very hard to decide between my four favorites, and had already considered flipping a coin between them.
Not sure how to feel about Approval Voting making picking winners less stressful coinciding with pretty thin pickings. There's only three reviews I approved of enough to vote for, as "real book reviews", but none were great...and they wouldn't have made even that cut if included in prior years' contests, for me. More excited this year to see the reveal of which one(s) was(n't) written by Scott than about the actual winner.
Many people have said they were disappointed in this year's. I don't have a good idea how to solve this (other than maybe cancelling the "affirmative action" policy for weirder reviews, but I don't think that would go far). Maybe the community is book-review-ed out?
To be fair, if it's a book about linguistics, Chomsky is the 800 lb gorilla in the field and you can't go far wrong assuming he'll be in the book, if not the subject of the book.
I do get what you're saying, but on the other hand, would you really not read a book if it was published in 1910? Even just seeing how views on history/science/ethics/best system of governance/poets whose names will be writ in bronze change over the decades is valuable knowledge, to realise how we get from A to B while claiming (or disclaiming) that Thing Y has always been this way and that "all right-thinking people agree that Z is how it should be done".
I definitely don’t feel book-reviewed-out, I found the exposure to the reviewed works valuable and ended up reading three. I don’t know if a heuristic can be applied to selecting finalists to improve the process; in fact, applying a heuristic may reduce the utility of the process compared to going on pure gut instinct about which reviews are the best.
I think by “review-ed out” he might mean that all the people who are going to write good reviews may have already written one, and may not want to write another of this sort for a while.
Yeah, my interpretation of recent review quality is that all the true multi-metric "A" material has been run through, and until more is generated by the few capable of producing it, we're in a realm of higher-variance strategies that really ping the meter for small niches of people here and there, but leave most unfulfilled.
I didn't get that sense. To me the general standard was higher, even if the best reviews didn't reach the heights of previous winners. I solidly approve of at least a third of the finalists, even if none is a Njal's Saga review.
I'm not going to do this for purely selfish reasons - having fourteen weeks where there's a guaranteed "free" blog post that I don't have to personally write gives me a bit of a summer vacation.
I thought they were all good, such that my confidence that "if I allocate time out of my busy life to read the next book review I shall not be disappointed" never waned.
But I wonder about the selection process. Were there enough votes for each to properly identify the cream of the crop? Especially since people (can) self-select which reviews they sample?
I tried to combat this bias by sampling from the pool of reviews randomly. Unfortunately, the third one that came up for me was a super-long review of Sadly, Porn. I felt morally obligated to finish it, having committed to the process, and it was excellent, but given the complaints last year about review length, I couldn't bring myself to upvote it and inflict it on everyone else.
All of which is to suggest, would it make sense to feed people random reviews during the screening process?
I think there may be a point where enough people have reviewed the books they're truly passionate about (which will often be the best reviews) that the pickings grow slim. Maybe switching to a biennial (with some other contest on the off years?) would improve things.
I don't think the quality of the writing dropped off all that much. I think most people are disappointed because none of the books/topics really struck a chord that led to wider discussions in the way that reviews like the one on Georgism did in the past. That one set off a revival of discourse on the topic, to the point where basically everyone in the ratsphere knows what you're talking about when you mention land-value based tax policy.

For whatever reason, none of this year's book reviews sparked that kind of viral community intellectual discourse. Nothing connected deeply with the rationalist egregore, so to speak, likely because none of the topics sparked a ton of new debate or insights within the community. For instance, Real Raw News was interesting and shed light on an area of our world not everyone knew about, but at best it shifted some perspectives; I don't think it was revelatory in the way past winners were. And Two Arms and a Head gave some brutal new insights into paraplegics, but I'm guessing the majority of the community is already pro-euthanasia, so it mostly solidified opinions rather than inspiring much debate or introducing anything super revelatory.
I sort of wish the Ballad of the White Horse struck more of a chord because I personally found it rather enlightening in certain respects, but I can understand how an epic poem that generally lays out the case in favor of virtue ethics and conservatism didn't go over that well with this crowd.
I had never heard the word “egregore” but was surprised to see that the concept is frequent enough that Google didn’t try to autocorrect it to “egregious” (the way my phone did when writing this comment). I forget if the word “zeitgeist” originally comes from Hegel, but it seems to be a similar concept to that.
I agree. It's not so much that the quality of the writing is lower this year - but I found the themes and discussions more interesting last year. Several of the "reviews" were just excuses to talk about something the author really wanted to discuss, which I was fine with. For example the one on nuclear safety, I really liked that one as well.
There's probably room to improve the first round screening process, but any suggestions I'd give there are highly speculative because the data that would inform an improvement process is (correctly) private.
One concern that I've seen other people bring up is the possibility that luck of the draw in terms of first-round readers may have an oversized effect. I.e. if a review only gets one score or only a handful of scores, then one or two of those readers being an unusually tough grader is going to have an outsized effect on the average score. Or one or two readers being an unusually generous grader. Or, particularly for weirder or otherwise controversial reviews, where there's likely to be a strongly bimodal distribution of how much people like it, that review might be a top finalist or left deep in the slush pile depending on which side of the distribution were the people who happened to rate it.
Of course, if every review is getting dozens of scores, I'm sure it averages out. But if the number is small enough for this to be a potential issue, there are a few potential mitigations I've seen proposed.
The low-hanging fruit for the tough-grader problem is probably including a scoring rubric or other heuristic on the form for how to translate subjective quality into a number; something like guidance on the scores of the lowest-rated finalists, average finalists, and winners in last year's contest would be helpful. A more advanced solution, which would only work if enough first-round readers are scoring multiple reviews, would be some sort of stack ranking, either on the form or via a script that post-processes the form data to find scores submitted by the same person. If you have enough data for "Reader X liked Review A better than Review B", then you can programmatically generate Elo ratings or something similar based on it. I expect I could help with the scripting if you're interested in doing this.
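A sketch of that pairwise-to-Elo idea (the data shape and function name are my own assumptions; real form data would need cleaning and de-duplication first):

```python
from collections import defaultdict

def elo_from_ballots(ballots, k=32, base=1000.0):
    """Derive Elo-style ratings from per-reader score lists.

    ballots: one dict per first-round reader, mapping review_id -> score.
    Each pair of reviews scored by the same reader is treated as one
    head-to-head 'game' won by the review that reader scored higher.
    """
    ratings = defaultdict(lambda: base)
    for scores in ballots:
        items = list(scores.items())
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                (a, sa), (b, sb) = items[i], items[j]
                if sa == sb:
                    continue  # ties carry no preference signal
                winner, loser = (a, b) if sa > sb else (b, a)
                # standard Elo update: surprise wins move ratings more
                expected = 1 / (1 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
                ratings[winner] += k * (1 - expected)
                ratings[loser] -= k * (1 - expected)
    return dict(ratings)
```

Because each update is zero-sum, the result only encodes relative preference, which is exactly what sidesteps the tough-grader problem: a reader who scores everything two points low still produces the same win/loss record.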
For divisive reviews, the low-hanging fruit is probably to use some kind of statistical signal for high variance in scores (standard deviation, difference between arithmetic and geometric mean, etc) as an additional output of the first round scores. Reviews with signs of being controversial could either have their average ratings adjusted down or be subject to a bit more scrutiny during your subjective review step.
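As a sketch of what that divisiveness signal could look like (the function name and output format are my own invention, not anything from the contest tooling):

```python
import statistics

def controversy_stats(scores):
    """Flag reviews whose ratings look divisive/bimodal.

    Returns the mean, the standard deviation, and the gap between the
    arithmetic and geometric means. The geometric mean is dragged down
    by low outliers, so a large gap indicates spread-out scores.
    Assumes all scores are positive (e.g. a 1-10 scale).
    """
    mean = statistics.fmean(scores)
    sd = statistics.stdev(scores) if len(scores) > 1 else 0.0
    gmean = statistics.geometric_mean(scores)
    return {"mean": mean, "sd": sd, "am_gm_gap": mean - gmean}
```

For example, a review rated [1, 1, 10, 10] and one rated [5, 6, 5, 6] have the same arithmetic mean, but very different standard deviations and arithmetic-geometric gaps, so either statistic would pick out the divisive one.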
Another thing that might help, both with divisiveness and with the number of ratings in general, would be to have an explicit "did not finish" response on the form, with guidance to use it if you stopped because you got bored. This should give readers permission to bail out early and move on to another review if they start reading a long review they find tedious. It should also help with divisive reviews that get a higher average rating than they otherwise would, because people who dislike them are more likely to stop reading and not rate them at all.
>For divisive reviews, the low-hanging fruit is probably to use some kind of statistical signal for high variance in scores (standard deviation, difference between arithmetic and geometric mean, etc) as an additional output of the first round scores.
I like this idea: reviews that inspire very low and very high scores may be better finalist options than their averaged out score would imply. A "sort by controversial" option might get us some good reviews!
What I think would really solve the problem is a tool where you hit a button and it gives you the name of the review that has the fewest ratings so far. Just having that available would mean a lot fewer reviews with only a few votes. However, I don't know if that would be technically feasible. Even a tool that just showed the number of votes (not the scores, just the number of people who have voted) for each review would even things out a bit.
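Given a running table of rating counts, the button itself would be nearly a one-liner; a sketch with hypothetical names:

```python
def next_review_to_rate(vote_counts):
    """Return the review id with the fewest ratings so far.

    vote_counts: dict mapping review_id -> number of ratings received.
    Ties are broken arbitrarily (first-seen order in the dict).
    """
    return min(vote_counts, key=vote_counts.get)
```

The hard part would presumably be keeping `vote_counts` live and serving it, not the selection logic.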
>reviews that inspire very low and very high scores may be better finalist options than their averaged out score would imply
I was actually thinking the reverse. I noticed that with this year's finalists, there were several where the comments had a strong mix of people who absolutely loved them and people who absolutely hated them. And the people who hated any given review were often also complaining about the overall quality of the finalists.
I'm not altogether opposed to divisive reviews making the finals if the average reaction is strong enough, but I suspect there may be a problem with divisive reviews having their scores more influenced by luck of who scores them in the first round (causing both Type I and Type II errors), and also with their scores being artificially inflated by first-round readers who DNF them without leaving a score.
IIRC I waited until every book had at least ~5 ratings, then added ~3-4 "dud" ratings that were at exactly average to all of them, so that books that lucked into getting a few great ratings didn't beat more-rated books.
Scott, that doesn't work. Let's say one book had 5 real ratings, and another got 10. Five real ratings, like 5 coin tosses, are a smaller, less representative sample than 10 coin tosses. By adding in 5 more dummy ratings you don't get another bite of the real variance in the data. You just keep the same average result, but it looks more valid if people see that it's based on 10 results.
As it happens, my review last year was one of the ones that had so few ratings by midway through the voting that it was on a list you put out of reviews without enough ratings, requesting that people read some more from that list. Now I wonder whether your request brought its total number of ratings up to average. Did I really get as many real votes as are showing on the page of results?
I don't know what to do about the problem of sparsely rated reviews, but your hack doesn't work. Imagine a simplified and exaggerated example with coin tosses. You want to know whether a coin is balanced and fair, so you toss it twice, and it so happens you get 2 heads. But you know that coin didn't get a fair chance to demonstrate the real distribution of its results, so you add in dummy results that have the same average as the real results. So you add in 98 more heads, just to be generous. Now you have a coin that turned up heads on a hundred tosses (98 of which were dummy results). If a real coin came up heads 100 times, we'd be damn sure it was not evenly balanced and fair. But we can't say the same about the one with 2 real trials and 98 dummies that have the same average result as the 2 trials!
I think it does work. You're right that it doesn't add signal, but it does compensate for variance, which is what I want.
Imagine a hundred reviews, all of which have true goodness level 8, but where there's a lot of noise. Some have 100 ratings - since the noise evens out, they all get score 8. Others have 1 rating each. Since there's noise, some of these get rated 6, 7, 8, 9, or 10. In this world, all the winners are reviews that got one noisy rating, and that one noisy rating happened to be 10. That is, the ones with fewer ratings sometimes end up with an average of 10, and so they always beat the more-rated reviews with lower variance, which always end up with an average of 8.
This is approximately how the real data look - if I just took the end result, I think almost every finalist would be one of the reviews with the fewest ratings.
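That scenario is easy to simulate; a rough sketch with made-up noise parameters rather than the real contest data:

```python
import random

def top_ten_by_raw_average(n_reviews=100, true_quality=8.0, noise=1.5, seed=0):
    """Half the reviews get 100 ratings, half get just 1; every review
    has the same true quality. Returns the top 10 ids by raw average.
    Ids 0-49 are the heavily rated reviews, 50-99 the one-rating ones.
    """
    rng = random.Random(seed)
    averages = {}
    for i in range(n_reviews):
        n_ratings = 100 if i < n_reviews // 2 else 1
        scores = [true_quality + rng.gauss(0, noise) for _ in range(n_ratings)]
        averages[i] = sum(scores) / n_ratings
    return sorted(averages, key=averages.get, reverse=True)[:10]
```

Run it and the leaderboard is dominated by the one-rating reviews: their averages keep the full rating noise, while the 100-rating averages all land very close to 8.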
The solution is to start with a prior of all reviews being average, then let the actual ratings act as evidence to move them away from that prior, so that you need several ratings to appear to be the best review in the pile. That's what the extra dummy ratings are doing - setting the prior.
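The dummy ratings amount to the standard "Bayesian average" shrinkage trick; a minimal sketch, where `prior_weight` plays the role of the 3-4 dummy ratings and the parameter names are mine:

```python
def bayesian_average(scores, prior_mean, prior_weight=4):
    """Shrink a review's average toward the global mean.

    Equivalent to padding the review with `prior_weight` dummy ratings
    equal to `prior_mean`: one lucky 10 barely moves the estimate,
    while many real high ratings overwhelm the prior.
    """
    return (prior_mean * prior_weight + sum(scores)) / (prior_weight + len(scores))
```

With a global mean of 6 and weight 4, a single lucky 10 only gets you to (24 + 10)/5 = 6.8, while ten real 10s get you to (24 + 100)/14, roughly 8.86, so consistently well-rated reviews still rise to the top.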
I misunderstood what your system was — thought you were saying that you padded someone’s real scores with dummy scores that averaged out to the same average score the person already had. But I reread when less tired, and see that what you did was add in dummy votes that averaged out to the group average score — right? So if all the votes for all the reviews taken together averaged out to 5, you gave everybody the equivalent of enough 5’s to bring their total vote to 10.
So I see your reasoning. It’s that if someone’s short on votes, the best guess for what each of the remaining votes would be is 5, or a scattering of numbers that average out to 5. Another way of saying it is that you’re regressing the average scores of people with fewer than 10 votes towards the mean. That’s nowhere near as bad as what I thought you were doing, because of course outliers do tend to regress towards the mean. But it’s still not great. Here are 3 reasons why, which really all come down to the same thing:
-The final scores people get who had 4 or so dummy scores thrown into their pot will be less valid measures of how interesting other people found their reviews. There’s just less actual information in their final number. It’s as though somebody interviewed you about your takes on this and that political candidate, and ran out of time, and so wrote an article about your politics where they assumed your views on the remaining candidates were those typical of California males.
-The few real scores for people who ended up with too few real votes are not great information. There have not been enough votes for the various random factors in the mass of votes to cancel each other out. But they are still the best information we have about how their review is faring, and they have some predictive power for what their remaining votes would have been, had they gotten more. You are disregarding that information. Put in terms of the imaginary article about your political views, the equivalent argument is that if the interviewer must guess at your views about candidates you did not discuss, she should guess based on what you have said about other candidates, not on your demographics.
-Assuming 5 is the average score for all reviews over all voters, 5 is usually not the best guess for what scores people without many ratings would have gotten from real raters. It’s only the best guess for those whose real scores, before you start padding with dummy scores, average out to 5. For people whose real scores average higher or lower than 5, the best bet is that additional real scores (as a group, separate from their first few real scores) would have averaged out to somewhere between 5 and the average of their initial small batch of real scores. Everything tends to regress towards the mean, but most things don’t go *to* the mean, they just slide some of the way in its direction. And there’s a distribution of where the sliders end up. Most slide a moderate distance towards the mean. A considerably smaller number slide all the way to it, and a considerably smaller number stay where they were and don’t slide. A very few slide beyond the mean, over to the far side of average. And a few that were way above or below the mean get even further above or below it. Overall, the most common place for a measure to land is somewhere between the first measure taken and the overall average after all measures for everyone are taken.
Perhaps we need more guidance about what rating should be considered "average for the lot", etc. To me, it's obvious that average should be 5 or 6, but some people here seem to love grade inflation (too many professors here?) and said they thought most books should get a 7 or higher. Maybe it should be pointed out that the book reviewers never see the ratings, so nobody is going to hurt anyone's feelings by giving a review a 2 if it deserves it.
~7/10 is the natural mode of a typical gaussian-esque distribution, because the spectrum represents the entire possibility-space, not just the set of the entries which were actually submitted. E.g. a rating of 2/10 should be reserved for people who are practically griefing, barely literate, etc.
(N.B. this isn't mere speculation of others' rationales. this is how i feel *personally* and I find it highly probable that this generalizes to others. I remember reading that OG thread you had in mind, but I didn't speak up because I figured someone else would.)
No, people do see their ratings, at least they did this year. I forget the context, but for some reason Scott put up a link where you could look at all the scores someone got (with some dummy votes mixed in, if the person got fewer than 10 real votes). Of course people don't get info about who their voters were.
What Scott said was that when all books had at least 5 ratings, he added some dummy ratings, I think enough to bring the total ratings for the review up to 9 or 10. The dummy ratings were all set to whatever the average was of all the ratings given to all the reviews, so probably around a 6. I'm not sure he actually gave 6's, though; he may have given something that averaged out to 6's, like say 2 5's and 2 7's. If you keep reading down the thread from that post of Scott's, you'll see the next thing is my pointing out the ways the dummy ratings don't really equalize things for people.
I can't really think of a better solution, though. Well, actually I think it's better (though still bad) to just average out the 5 ratings somebody got, if they only got 5 ratings, and call that their result. But Scott mentioned that often there are a scattering of reviews with only one or 2 scores, and those scores are a 9 or a 10, and if he just went by the real ratings, almost all the finalists would be reviews that got one or 2 high votes and no more.
Or we could limit the length of reviews, so that more people can read more reviews. We could require anyone who enters to read and rate at least 20 reviews. Once a review got 20 ratings we could remove it from the list, so that it didn't get any more ratings, then all reviews would get the same number of ratings. That seems like reasonable social engineering to me, but Scott doesn't like to make rules, and lotsa people here get very pissy about rules too.
A possible reason we might be book reviewed out: I found it surprisingly painful not to be a finalist when I entered a review last year. What mostly got me wasn't the failure to get an A+, it was realizing that nobody here was ever going to read my review and talk with me about it. Of course I knew in advance that that was the fate of non-finalists, but the reality bothered me a lot more than I expected. Talked with a number of other non-finalists who felt the same. I don't think I'll ever enter another book review, now that I've experienced what it's like not to be a finalist. Don't know what to do about that built-in problem, but I do wonder if it's leading a number of people who would otherwise have entered reviews to hold back from doing it if they got zapped by the Big Silence the year before.
I don't think I participated last year, but I didn't read all the reviews this year. So the silence may have been the luck of the draw as to certain topics prompting more reads? And the slight attachment that may follow the reading of a lengthy review?
I apologize if we were supposed to read them all. I expect we were. But internet time is a luxury in some ways.
If you reveal which review was yours, I expect you'd get some people to return to it.
Trying to pinpoint the nature of "thoughts" (sorry for the possibly inaccurate shorthand) raises for me one memory, or rather a memory of something that as a child I could do at will, and did for a time, around the age of six. It was something that had a certain fascination but also left me disquieted, so that I would tell myself "not to do it". But then like as not, would do so. Typically only when I waked in the night, which I seldom did. I was able to close my eyes and - what I felt was happening - zoom out into space very, very fast. Note that I didn't understand anything about space and barely knew the word. The speed at which this happened - this movement within my mind that was less like movement of a body and more like my brain case expanding, seemingly infinite in extent, but yet I felt it to be "the universe" (again, not a word I could have used) - was disconcerting, vertigo-inducing, but exciting as speed is. But the invariable end result was what upset me more than the sickeningly rapid movement. I found that at the end of this interior sensory experience - well, I described it once to my mother, expecting perhaps that she would explain it away: I felt that I was not related to my family, that they were exactly as any other people, being now so distant, not bad or good but just nothing to me, as was the bed I'd left behind, the room, house ... They might have been the neighbors. This made me cold with fear, and ruined the exhilaration.
Mother didn't care to hear about this, and I'm sorry to burden you with this dull kiddie memory - but I've never encountered anyone else who thinks about what is happening, interior-ly.
I can remember thinking in my early teens that thoughts weren’t really describable, and that the notion of thoughts as a stream of words or images was just a simple cartoon version of what they were. And that reminds me of one of *my* early memories. In primary school the teacher would always put up a spring poster that said something like “spring has sprung,” and all of us little kids would make construction paper flowers to decorate it. But I always thought about how real flowers didn’t look like our formulaic flowers, and how when the first warm day of the year came I did not say “spring has sprung” to myself. And that made me feel like I just wasn’t a normal, decent, cheerful person.
As to why your piece didn't get the give-and-take you wanted - I hadn't seen it before, or voted before - but perhaps it felt more like an essay than a review? Perhaps that's a contest SA might consider. Also, the Statue of Liberty part was an interesting exercise but I think possibly Americans not from NY or NJ are not quite as familiar with it as it may seem they ought to be.
Didn’t vote for it, maybe because it seemed like an essay — yeah, whatever. It’s not that I’m mystified that I wasn’t a finalist, or feel like the reviewers weren’t fair. What got to me was just suddenly realizing that nobody here except the 6 or so who voted would ever see it. As I said, I knew in advance that would be the case; I just wasn’t prepared for how bereft I felt.
Would have thrown that a like and an approval vote if it'd made the cut. Discussion would likely have been lively and interesting, based on past introspection-themed posts like jhana. Did find the homunculus art disturbing, and noticed internal-skeptic epistemic alarms going off at several of the purported scientific conclusions (I assume the book had citations one could follow up on for the actual studies themselves); I also left with more a sense of Schwitzbagel's views and theories, rather than the contents of the book itself. Which in some sense is fine, a book is mostly just a vehicle for delivering information. But I find it helpful to have a bright dividing line between what's a review of in-book content, versus what's an exploration-gestalt of additional context and outside material. Either way, the finalists would have been enriched by this entry. Guess that's more motivation to actually vote in future pre-selection.
This was my first year entering, and not making the finals also bothered me more than I'd expected, for the same reasons you describe. I did at least get a few people discussing it in the comments after the first-round voting closed, which gave me some good actionable feedback if I want to rework it for other audiences (which I am seriously considering doing) and also was a huge improvement over pure silence, especially since some of the comments were strongly approving.
I entered a review one year and didn't get anywhere, which I sort of expected. Of course, it is a little ego deflation to realise you're not as good as you were maybe patting yourself on the back as being, but that's no harm to prick vanity.
Do try again if you feel you have a book or other piece you really want to review, practice helps! And it really would be a pity if you felt constrained. Even if only six people see it, that's six more than if you never wrote it.
Do you think it would help if Scott (or a reader for Scott) would compile a list of, say, the 30 top-rated reviews that didn't make it into the final? I found your review good, and getting into that list may be a more realistic goal than becoming a finalist.
Making it into the shortlist would probably get you more readers, but would rather not initiate a big discussion. But if you put it on your own blog, then perhaps the shortlist could contain a link to your blog post, and there could be some discussion in the comments there?
(In principle this idea is not limited to a shortlist. But if Scott publishes all reviews together with their rating, then it would be publicly visible when a review only received 3/10 as average rating.)
Thanks to those who responded by actually reading my old book review and/or suggesting ways to soften the lady-or-the-tiger setup of the book review contest. I think that in my case the best solution is not to write more book reviews, but instead, in a really egregious failure to generalize the book review lesson to similar situations, publish a serialized novel on Substack. It’s about life at a point in a future that features an artificial superintelligence. I’ll start posting it when I’m far enough along not to worry about keeping up with my promised publishing schedule.
Yeah, the chance of any kind of payoff is just too small for reviews, esp in proportion to how much time they take. With comments, there’s kind of a sliding scale: the more engagement your post generates, the more response you get. That I am fine with. But with book reviews it’s either the lady or the tiger.
> I don't think I'll ever enter another book review, now that I've experienced what it's like not to be a finalist.
I read a webpage once that was devoted to Snoopy's writing career. One section focused on the rejection letters he'd receive, saying things like "Dear contributor, we are returning your dumb story. Please don't write to us again."
The page author noted that "in my experience, rejection letters from publishers are scrupulously polite. But Charles Schulz has captured exactly how they feel."
My complaint is that most of them aren't reviews, they are term papers (or even books themselves in a few cases). Even if one is good enough to warrant it, I still find it a bit daunting (let alone the ones that lost me early on). I think it could be interesting to enforce a modest word limit next time around, and it would also give writers a chance to flex a different skill.
Tastes differ, of course; I don't mind length if the review is good and holding my interest. There have been some where I've thought "Cut it back, dude, you've said all you needed to say" but that's because they bored me.
So sometimes the feeling of "This is too long" has more to do with internal boredom threshold than the substance of the review. I don't really want a word limit, but on the other hand I don't want to impose my own preferences, because if other readers do feel "the reviews are too long", then they're not enjoying the experience, and the book review contest should be for fun as its main purpose (we're not the freakin' Booker Prize, after all).
Eh...n of 1 and I'm Not A Representative Sample of the median ACX readership. But I've been getting my book review high this year from reading Zvi's heroic-length ones, plus FdB's occasional dabble in the field (although those tend to review the author as much as the book). So there are still Quality Reviews out there in the rat-adjacent sphere; The Talent hasn't been lost, the NYT hasn't poached and/or doxxed all of our good book reviewers.
Maybe it's just a selection effect thing? After all, the finalists we see are the ones voted for out of the initial set...perhaps there were a bunch of More Of This, Please reviews in there that simply didn't make the cut? I never have enough time to wade through the initial huge batch of submissions, so in that sense, it's my own small fault for being meh on the finalists. Can't complain if you didn't vote! I am wary of suggesting that there be a longer window for winnowing though...logistics are a nontrivial inconvenience, and not keeping things moving at a respectable pace causes significant drop-off of interest and participation. (Or that seems like a reasonable prior?)
Lining silver: p(me submitting a review) goes up when the field looks less competitive. Perhaps other intimidated loquaciousists will also feel so inspired.
"Lining silver: p(me submitting a review) goes up when the field looks less competitive. Perhaps other intimidated loquaciousists will also feel so inspired."
I hoped to do a review this year, but life and work just got too damn busy at the time. Maybe next year! If someone else doesn't review my chosen book before me.
I always mean to rate some of the initial submissions to help select the finalists, but wind up intimidated by the sheer volume of reviews. Do I start from the beginning and privilege reviews at the top of the alphabet? Do I only focus on reviews of books I've read (or haven't read)? It might help to do something gimmicky and simply say "I've dumped the reviews into 30 randomized groups, pick whichever group your birthday falls on and look through those."
I read and rated some (maybe 10 or so?) of the submitted candidates, and found at least 2-3 that I would have preferred make the cut over all or most of the finalists. Looking at the full list, I remember liking the Gödel, Escher, Bach one, as well as the (controversial) one on Sadly, Porn. I also liked and rated How the War Was Won, which made the cut. Given that I found several good ones in the pre-phase, I was hopeful that the quality of the finalists would be really good this year, but personally I was somewhat disappointed and found last year to be much better.
So this could of course just be my personal taste not aligning with the rest of the electorate, but it could also be evidence that the screening process did not select the best reviews for the finalists.
This was my first time reading pre-finalist reviews — the one on Who We Are and How We Got Here stuck in my head much more than any of the finalists did.
I’ll check out the ones you mentioned! I’m curious if there were really hidden gems that nominations missed out on, or if individual preferences just vary more than I expect.
It’s possible the complainers are just louder. I thought this years set were all pretty good, and thought I got something out of every review - even the weaker ones or those I didn’t really like.
It might have just been luck of the draw due to the sheer number of reviews. I was going to vote in the first round this year until I saw how many I'd have to read.
If this type of comment isn't allowed, I apologize -- but I'm the editor of a humanities journal and we would love to publish book reviews by brilliant ACX readers. We can pay a small amount. If interested, feel free to get in touch:
If you're the reviewer, let me congratulate you! I really enjoyed it, and I liked how you included your own translations to show how English versions make it different depending on the translator (I don't know if it is genuinely difficult, but translations always mention how tough it is to turn terza rima into a corresponding English meter because there isn't a directly corresponding form in English verse).
If you do edit it and polish it up, I would urge you to submit it to the magazine. I think it's worth a wider audience. Even if that is only "people in Texas who read literary magazines" 😀
May I ask, is your magazine's name inspired by the London club? Whenever I see "Athenaeum", I automatically think of that (thanks to marinating in 19th century British novels):
So, an approval voting question. Not being a voting nerd, it seems to me that if you vote for more than one option, your vote should be divided between your picks: pick two and each gets 1/2 a vote, pick three and each gets 1/3, etc. If each of my approvals counts as a full vote (so picking three is like getting three votes), then the best strategy to get what I want seems to be approving of about half the options. And that seems wrong. Say this blog were about half red and half blue, and the reviews also divided easily into blue-leaning and red-leaning; then voting for all the reviews that leaned your way politically would be the way to reward your side. Or am I missing something?
Let's call the factions "blue" and "green" to avoid thinking about modern politics. Say 3 green politicians and one blue politician are running against each other in a district where 2/3rds of the voters are green. If the greens split their votes across their three candidates as you describe, the blues will win despite being only 1/3rd of the population. (This is called the spoiler effect, where a losing candidate affects the results of an election simply by participating.)
Whereas, under approval voting, the most moderate green candidate is likely to win (being approved of by some of the blue voters). This is a much more reasonable outcome.
Strategic voting does come into play when you decide your cutoff for approval. (Do you only vote for your first choice? Your first two choices? All of the options except your least favorite?)
However, first-past-the-post and instant runoff have worse strategic voting issues imo.
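The spoiler-effect point above is easy to see in a toy model. This is a minimal sketch with entirely made-up voters, candidate positions, and an approval radius, just to illustrate why approving more candidates doesn't multiply your influence while plurality punishes a split camp:

```python
from collections import Counter

# Hypothetical 1-D political spectrum: six "green" voters clustered
# low, three "blue"-leaning voters higher up. All numbers invented.
VOTERS = [1, 1, 2, 2, 3, 3, 5, 5, 7]
CANDIDATES = {"green_a": 1, "green_b": 2, "green_c": 3, "blue": 6}

def plurality_winner(voters, candidates):
    """First past the post: each voter backs only the nearest candidate."""
    tally = Counter()
    for v in voters:
        tally[min(candidates, key=lambda c: abs(candidates[c] - v))] += 1
    return tally.most_common(1)[0][0]

def approval_winner(voters, candidates, radius=2):
    """Approval: each voter approves every candidate within `radius`."""
    tally = Counter()
    for v in voters:
        for name, pos in candidates.items():
            if abs(pos - v) <= radius:
                tally[name] += 1
    return tally.most_common(1)[0][0]

# Plurality: the three greens split six votes three ways, so blue
# wins with three. Approval: the moderate green (green_c) also picks
# up approvals from the nearby blue-ish voters and wins.
```

Under plurality the green camp's six votes split 2/2/2 and blue wins with three; under approval, the most moderate green candidate wins, matching the "more reasonable outcome" described above.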
I think there’s no real “should” here abstractly. For some purposes (ones like the case you’re suggesting, with factional voting where each faction gets close to half the things being voted for) your suggestion would be good. In standard electoral contexts, people often have the opposite worry, that too many people will “bullet vote” for just the candidate of their favored party, and not give their “approval” to enough secondary candidates. Your proposal would exacerbate that problem.
Given Arrow’s impossibility theorem (and various extensions of it) there isn’t going to be a single voting system that is best for all sorts of voting. It’s going to depend a lot on the concrete situation and the kind of judgment we want to elicit from people.
Why stop at half? If you pick all of them, you get all the votes! ...Which goes to show that more votes do not necessarily give you a larger influence on the result. Approving of more candidates already dilutes your vote, making it unnecessary to further reduce it.
You're right that you can have more or less influence on the results depending on how many you vote for. It won't necessarily be optimal to approve of half of the candidates, but in some circumstances it might be. One exception would be if you feel much more strongly about your top few choices than about the rest of them, then you would just vote for those ones. Another exception would be if the half of candidates that you like better lines up with the half of candidates that almost everyone else also likes better, then you would be better off voting for only your top quarter of them.
I think it's not great that approval voting incentivizes you to think about these effects instead of purely about which options you like best, but it sure is better about that than when everyone only gets one vote. There you have to coordinate with the half of people who most closely agree with you on which candidate to vote for, or else you'll get the candidate that the half of people farthest from you like better. Overall I would rank the voting systems as:
Ranked Choice Condorcet slightly better than Approval Voting, which is slightly better than Instant Runoff, all of which are immensely better than First Past the Post (the common one).
... And now I took too long so Kenny Easwaran has probably answered this better, but maybe this can still clarify something further.
I was re-reading the Creutzfeldt-Jakob disease review and [before I got to that section] connecting it in my mind with Chronic Wasting Disease of deer and goddamned libertarian bubbas in Texas; there's basically a similar blame game to play as with the UK government. Idiots thought vanity "deer breeding" was a good moneymaking idea and started transporting deer around the state, like idiots, and Chronic Wasting Disease, which we didn't have in the wild, got introduced into the state by deer brought in from places that did have it. As far as I know it's still only present in these stupid "captive" situations (has it spread to people? Officially no, but there are anecdotes circulating of CJD among deer hunters). It got to the point that the state had to set up check stations in a number of places - controversially! - much against the will of the bubba idiots. This is a terrible violation of their rights to get Chronic Wasted, or at the very least to get deer Chronic Wasted.
ETA: I'm sorry for being a crank, but consider this a PSA of sorts - I would not eat venison in Texas unless I had shot the deer myself after observing it for some time. As I do not hunt, I will not be eating any. Sure, it's probably all fine - but I would never say it's fine just because elements of the state government said so. Kudos to our parks and wildlife agency for pursuing this against the bitching of the deer "breeders".
Also, it should be noted that those same people raise hell when the subject of protections for mountain lions comes up. (The last TPWD effort in this direction resulted in this radical recommendation - people should occasionally have to run their traps so that lions are not left to starve inhumanely; this was fiercely fought.) The eradication of those lions, in many areas, led to deer overpopulation, and a general feeling that "they weren't making deer like they used to" in terms of handsomeness. Which then led to this ridiculous deer-breeding contortion (among the stupid - normal people know this is stupid - you hunt what nature provides for you to hunt, and not inside a game fence).
My high-school friend's mother suddenly got some sort of spongiform encephalopathy out of nowhere. No one could figure out how or why; no apparent genetic reason, wasn't Mad Cow...
...but she ate a lot of venison her husband shot. They did the butchering and so forth themselves, and apparently there was some suspicion over an ill deer that she and only she "accidentally" got fed by his weird-ass father / her weird-ass husband, who seemed to resent her (& also did, it was said by my buddy and on the grapevine at our small Christian school, even before she began losing her mind).
Supposedly, the Chronic Wasting Disease in deer cannot be transferred to people—but it sure seemed suggestive. Friend's dad / her husband was real weird, real unpleasant; she seemed nice, although after she was driving another mutual friend and me home and went up the down ramp onto the highway, we uh... we didn't drive with her again (and indeed she got the "privilege" taken away for her own good shortly after).
As far as I know, the doctors never did come up with any alternative except "spontaneous" development of a prion disease, but I'm not 100% sure on that.
This was in West Texas, FYI, around 2003-2007 (not sure exactly when it was found out & when she died, but it was something like that IIRC—progression was slower than in BSE, I believe, but still fatal in the end).
One thing I found strange was that so many poems made it into the finalists. I'm personally not a fan of poetry so I just immediately skipped over those.
Is 2 out of 14 really that many? I was surprised there wasn't more, given that Scott had announced he would do affirmative action for poetry and other "non-traditional" review subjects.
FWIW you could count the Poet's Craft Book as a third, being about poetry in general. Whether it's 14% or 20%, given the infinite number of non-traditional subjects to review, it's still overrepresented.
Next time around, could you require people to submit short hooks/summaries of their reviews? I would have liked to read more, but it's just hard to pick them from the title alone. Alternatively, you could use AI to generate summaries.
No, don't do this - it would spoil the reviews that build up to the main part/thesis without immediately giving it away; some of them even do it in ways that subvert the reader's expectations. LLMs are available to everyone; if you want an AI summary, go make it yourself.
Sorry, I wasn't referring to the finalists. I meant the beginning stage of the contest. I think it would just make it easier to find more "hidden gems" than by just looking at the title.
Oh, yeah, that makes sense. Though I'd still suggest an option to hide the summaries or something, for those who don't want to be spoiled in the initial stage either.
One of my big takeaways from the limited feedback I got on my non-finalist review was that I needed to explicitly explain at the beginning of the review why the book, which was very old and on an esoteric-appearing subject, is still important and relevant to people who aren't history and/or this-particular-topic nerds. I did discuss this, but I saved most of it for the conclusion when it was probably too late to do much good.
The first sentence is a bold move. You had me at "five hundred year old theological debate" but there's a reason I don't often discuss such things with other people. There's not a lot of us in this part of the pool. Ah well, like Schopenhauer said somewhere about all his unsold editions: "my works are like a mirror, if an ass looks in, you can't expect an angel to look out."
I really enjoyed the review and hope you publish it somewhere. Just to offer some feedback though:
Call me old-fashioned, but since you initially used Erasmus's Latin title, De Libero Arbitrio, I think you should point out Luther's reply is titled De Servo Arbitrio.
More substantively, I see you already pondered moving your conclusions more to the front. That might well be worthwhile. "The Background" is well done, but it's probably nice to inform a more casual reader as to why these people are worth reading about before launching into detailed exposition.
You should expound more on your interesting insight that today's debates are "rhyming" with the Reformation. I would like to read more about that.
An aside: Luther says: "The human will is like a beast of burden. If God mounts it, it wishes and goes as God wills; if Satan mounts it, it wishes and goes as Satan wills. Nor can it choose the rider it would prefer, or betake itself to him, but it is the riders who contend for its possession." (De Servo Arbitrio, pars i. sec. 24.) I've long maintained that quote is the source of Nietzsche's more famous maxim: “Can an ass be tragic? To perish under a burden one can neither bear nor throw off? The case of the philosopher.” (Twilight of the Idols).
Finally: I was under the impression that one of Erasmus's major concerns in Libero was with the doctrine of exclusive salvation, i.e., it sends the immense majority of mankind to hell entirely irrespective of their will, e.g., those dying in infancy, those who lived and died in heathen lands, etc. to which Luther replied in Servo, "This is the acme of faith, to believe that He is merciful who saves so few and who condemns so many; that He is just who at His own pleasure has made us necessarily doomed to damnation; so that, as Erasmus says, He seems to delight in the tortures of the wretched, and to be more deserving of hatred than of love. If by any effort of reason I could conceive how God could be merciful and just who shows so much anger and iniquity, there would be no need for faith." (pars i. sec. 23).
To me, these arguments are some of the most dramatic and may well grab the attention of a more casual reader, so you might put some of that in a revised version. Hellfire gets eyeballs?
Anyway, not that's it not a great review as is, I just thought I would offer some comments. Thanks for writing the review!
I've enjoyed the various book reviews throughout the years, but through a combination of laziness and "dammit I can't decide", I never really ended up voting. So this year, with approval voting, and the fact that some reviews were so bad I wanted to spit iron nails, I got off my ass and voted for the 6 best of this crop. Thank you Scott, and ACX, for this year's book review contest.
I was quite surprised to see #7 leading the Manifold poll this morning, but also, #7 was the only one I felt compelled to tell someone else about, because it was so uniquely and memorably unsettling to read, so I guess that says something.
To satisfy the third half of voting system nerds, can we combine approval voting with ranked-choice voting next year? Find the Smith set with ranked-choice ballots, then select the winner as the review with the highest approval amongst that set. I'd be glad to write the software to run the election.
If you're thinking of the review that was featured on this site, that was Scott's. I believe there was a review entered in the contest, but it didn't make the finals.
It was interesting, a fun bit of erudite razzle, though full of the kind of unearned presumption some people mistake for effective swagger. But – without spoiling its nature – that a jig was on became apparent fairly early, and once the jig was up, the result was confirmed disappointment rather than grudging admiration.
Have you actually read the book? Wouldn't blame you if you hadn't, of course. I thought Scott's review was a good read, and an honest grappling bout with the source material.
As to TLP himself... I think he had something worthwhile (and ideologically neutral, despite his self-id as a solid Republican) going on with his critique of precious individualism and the paralysis that comes with scorched-earth ego defence. But also plenty of unfortunate tics – Athens is not quite the killer allegory, advertising isn't quite the killer zeitgeist barometer, etc. I found him fascinating, regardless.
I assumed the winner here would be obvious, but then I looked at the prediction market, which was completely different from my picks. At the time I checked, the top two on Manifold were both ones that I was disappointed by and didn't vote for.
This was one of my favorite things I’ve come across on Substack (and how I found this one), incredible job to everyone. There wasn’t a single one I wasn’t fascinated by, but the Real Raw News review was one of the best pieces of writing I think I ever came across. I was among the few that knew of it beforehand — it’s a huge inside joke at my workplace, and a small circle of my colleagues will constantly slack the newest stories to each other. It always just seemed like a silly throwaway thing to me, that review made me actually think about it for the first time. I’ve been raving about that review to all my friends ever since.
The other biggest one I tell people about is Two Arms and a Head, one of the most horrifying things I’ve come across.
Anyways, thank you greatly for this. I’m new here, so I’m glad to hear you did something similar last year, which I will go search for. Please get this going again as soon as possible! I may throw my hat in this time
It was my first time too, so great to read all the reviews even if we're not likely to read the books, and I really don't want people to think readers are over book reviews! I think it's wonderful and maybe I'll have a go in future too.
I feel that many of the reviews were far too massive. A review should not be a book with comments on a book - perhaps next time it would make sense to establish some guidelines on maximum length?
Maybe I ought to vote for #7, Two Arms and a Head, for being, to me, a genuine infohazard. I read the review, large portions of the book, a blog linked in the comments by a man dying from cancer, and the rest of the comments all in between my doctor gravely referring me to a dermatologist and actually seeing the dermatologist. I don't know what happened, but I became basically convinced that I too had only a couple of months to live. I had the most bizarre and stressful week of my life, up to and including desperately begging God to spare my life...which I guess he did, because there was nothing actually wrong with me. I'm a really levelheaded person and have a happy life and good health. What on earth happened? Anyway I would vote for it, but I can't look at or think about it ever again, so Real Raw News it is.
Aaaah I'm so sorry my review made you spiral. From the bottom of my heart, I hope you've been feeling better lately. I've also had my fair share of health scares, and it's a special level of scary that I don't wish on anyone. [Hugs. All the hugs.]
Real Raw News was pretty awesome. I voted for it, too. Good choice.
There were a lot of 'experimental' reviews this year. I'll be voting Real Raw News just because that was hilarious.
Is there a google doc with all the submitted reviews available?
RRN was good (though I do feel mildly guilty about voting for something this culture war adjacent).
Probably my favorite too, the "sic" footnote was what tipped the scale
Linked in the original post, distributed over six files.
https://www.astralcodexten.com/p/choose-book-review-finalists-2024
I was slightly disappointed the review of the Divine Comedy didn't make it to the final, but I was very pleasantly surprised that the Ballad of the White Horse did, so on balance I'm happy!
Though put me in the small pile agreeing that the favourites over on the prediction markets are not books I voted for. Well, such is democracy!
Anyone else not have a clear favourite this year? I thought most were pretty good but none really stood out.
When I saw this post my first thought was "Oh, already? But I don't have a favourite yet!"
Approval voting is handy given that I have a bunch I'd be happy to see win but no single pick for first place.
I think my favourite still ended up being the first one (Yukichi Fukuzawa bio)
That was so clearly incomplete that I'm astounded at how high it's rated
I feel like this year had more good reviews but no really great reviews
so many were like good but with one major element being garbo and I can't approve of that. I guess it's a process of elimination
Yeah, like the Yukichi Fukuzawa one would have been great except that it was ruined by an ill-conceived political rant at the end.
that one I found tolerable, I'd call it a minor element
I don't remember the rant, but I do remember thinking while reading the review "You still haven't told me why I should care to learn about this guy". Okay, that's the job of the book, but the review was just - guy has ups and downs in Japanese politics/civil service, hops aboard the Westernisation train as a means of job security, ends up pro-modernisation.
And so? was my reaction. The review didn't make me interested in the man or the situation, and I mean, the Meiji period is *fascinating* in how it set the course for Japan that ended up with them deciding the Second World War was an amazing concept they wanted in on. but the review actually *bored* me, because I didn't care about the guy or what he was doing.
If we had votes for "this review made me want *not* to read the book", then that would have been my Number One.
I found 11, the Spirit of Rationalism one, exceptionally bad.
That one had the problem that the reviewer was a little too uncritically receptive of a 19th century work, but the major problem was the example they gave about the Swiss village undergoing a possession craze.
Of course we nitpickers jumped all over that, and the true facts were vastly different to the glib, 19th century Enlightenment/Protestant rhetoric of "dumb backwards peasants and their superstitions", so that really blacked their eye for them in terms of 'how trustworthy is this review?' because the facts contradicted the scenario as they presented it (the kindly big city doctor bringing the light of Science to dissipate the darkness of Superstition that the benighted were wallowing in and treating the mentally ill with kindness and niceness was not that kindly, and had no problem using police and soldiers to lock up people in the local hospitals until they were 'cured', for instance).
If they had it to do again, I would suggest leaving out the Swiss exemplar which in fact was not an example of the point they were trying to make, and concentrate on the book and the author with a bit more background on "why did the Victorians go all-in on Science! and Progress! as their guiding stars?"
Fortunately this year I thought ahead and put a like on the ones I knew I'd probably vote for, so I didn't actually have to remember anything when voting time came.
I hadn't read the preamble and was super happy when I realized that it is approval voting. I found it very hard to decide between my four favorites, and had already considered flipping a coin between them.
Not sure how to feel about Approval Voting making picking winners less stressful coinciding with pretty thin pickings. There's only three reviews I approved of enough to vote for, as "real book reviews", but none were great...and they wouldn't have made even that cut if included in prior years' contests, for me. More excited this year to see the reveal of which one(s) was(n't) written by Scott than about the actual winner.
Many people have said they were disappointed in this year's. I don't have a good idea how to solve this (other than maybe cancelling the "affirmative action" policy for weirder reviews, but I don't think that would go far). Maybe the community is book-review-ed out?
To be fair, if it's a book about linguistics, Chomsky is the 800 lb gorilla in the field and you can't go far wrong assuming he'll be in the book, if not the subject of the book.
I do get what you're saying, but on the other hand, would you really not read a book if it was published in 1910? Even just seeing how views on history/science/ethics/best system of governance/poets whose names will be writ in bronze change over the decades is valuable knowledge, to realise how we get from A to B while claiming (or disclaiming) that Thing Y has always been this way and that "all right-thinking people agree that Z is how it should be done".
I definitely don’t feel book-reviewed-out, I found the exposure to the reviewed works valuable and ended up reading three. I don’t know if a heuristic can be applied to selecting finalists to improve the process; in fact, applying a heuristic may reduce the utility of the process compared to going on pure gut instinct about which reviews are the best.
I think by “review-ed out” he might mean that all the people who are going to write good reviews may have already written one, and may not want to write another of this sort for a while.
Yeah, my interpretation of recent review quality is all the true multi-metric "A" material has been run through, and until more is generated by the few capable of doing it, we're in a realm of higher variance strategies that really ping the meter for small niches of people here and there, but leave most unfulfilled.
I didn't get that sense. To me the general standard was higher, even if the best reviews didn't reach the heights of previous winners. I solidly approve of at least a third of the finalists, even if none is a Njal's Saga review.
I think a narrower field of finalists might help, maybe the top 7 instead of 14?
Agreed
I disagree: with a wider field you're more likely to include something really great. Also, I like reading reviews.
Ooooh Scott you should include this as one of the questions in the next ACX survey
I'm not going to do this for purely selfish reasons - having fourteen weeks where there's a guaranteed "free" blog post that I don't have to personally write gives me a bit of a summer vacation.
Could you be persuaded to include MORE? Maybe do two a week over the same period.
I thought they were all good, such that my confidence that "if I allocate time out of my busy life to read the next book review I shall not be disappointed" never waned.
But I wonder about the selection process. Were there enough votes for each to properly identify the cream of the crop? Especially since people (can) self-select which reviews they sample?
I tried to combat this bias by sampling from the pool of reviews randomly. Unfortunately, the third one that came up for me was a super-long review of Sadly, Porn. I felt morally obligated to finish it, having committed to the process, and it was excellent, but given the complaints last year about review length, I couldn't bring myself to upvote it and inflict it on everyone else.
All of which is to suggest, would it make sense to feed people random reviews during the screening process?
I think there may be a point where enough people have reviewed the books they're truly passionate about (which will often be the best reviews) that the pickings grow slim. Maybe switching to a biennial (with some other contest on the off years?) would improve things.
I don't think the quality of the writing dropped off all that much. I think most people are disappointed because none of the books/topics really struck a chord that led to wider discussions in the same way that reviews like the one on Georgism did in the past. That one really set off a revival of discourse on that topic, to where basically everyone in the ratsphere knows what you're talking about when you mention land-value based tax policy. For whatever reason, none of this year's book reviews sparked that kind of viral community intellectual discourse. Nothing connected deeply with the rationalist egregore, so to speak. Likely because none of the topics sparked a ton of new debate or insights within the community. For instance, Real Raw News was interesting and shed light on an area of our world not everyone knew about, but it at best shifted some perspectives; I don't think it was revelatory in the same way as past winners. Or Two Arms and a Head gave some brutal new insights into the experience of paraplegics, but I'm guessing the majority of the community is already pro-euthanasia, so it at best solidified most opinions instead of inspiring much debate or introducing anything super revelatory.
I sort of wish the Ballad of the White Horse struck more of a chord because I personally found it rather enlightening in certain respects, but I can understand how an epic poem that generally lays out the case in favor of virtue ethics and conservatism didn't go over that well with this crowd.
I had never heard the word “egregore” but was surprised to see that the concept is frequent enough that Google didn’t try to autocorrect it to “egregious” (the way my phone did when writing this comment). I forget if the word “zeitgeist” originally comes from Hegel, but it seems to be a similar concept to that.
I agree. It's not so much that the quality of the writing is lower this year - but I found the themes and discussions more interesting last year. Several of the "reviews" were just excuses to talk about something the author really wanted to discuss, which I was fine with. For example the one on nuclear safety, I really liked that one as well.
There's probably room to improve the first round screening process, but any suggestions I'd give there are highly speculative because the data that would inform an improvement process is (correctly) private.
One concern that I've seen other people bring up is the possibility that luck of the draw in terms of first-round readers may have an outsized effect. I.e. if a review only gets one score or only a handful of scores, then one or two of those readers being an unusually tough (or unusually generous) grader will skew the average score. And particularly for weirder or otherwise controversial reviews, where there's likely to be a strongly bimodal distribution of how much people like it, that review might be a top finalist or left deep in the slush pile depending on which side of the distribution the people who happened to rate it fell.
Of course, if every review is getting dozens of scores, I'm sure it averages out. But if the number is small enough for this to be a potential issue, there are a few potential mitigations I've seen proposed.
The low-hanging fruit for the tough-grader problem is probably including a scoring rubric or other heuristic on the form for how to translate subjective quality into a number; something like guidance on scores of the lowest-rated finalists, average finalists, and winners in last year's contest would be helpful. A more advanced solution, which would only work if enough first-round readers are scoring multiple reviews, would be some sort of stack ranking, either on the form or by having a script post-process the form data to find scores submitted by the same person. If you have enough data for "Reader X liked Review A better than Review B", then you can programmatically generate Elo ratings or something similar based on it. I expect I could help with the scripting if you're interested in doing this.
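To make the post-processing idea concrete, here's a rough sketch of turning "Reader X liked A better than B" pairs into ratings. Everything here (function name, K-factor, the replay-in-random-order schedule) is invented for illustration, not an existing contest script:

```python
import random

def elo_from_pairs(pairs, k=16, rounds=20, seed=0):
    """Turn 'reader preferred A over B' pairs into rough Elo ratings by
    replaying the comparisons in random order for several rounds."""
    rng = random.Random(seed)
    ratings = {}
    for a, b in pairs:
        ratings.setdefault(a, 1500.0)
        ratings.setdefault(b, 1500.0)
    for _ in range(rounds):
        shuffled = list(pairs)
        rng.shuffle(shuffled)
        for winner, loser in shuffled:
            # expected score of the winner under the standard logistic Elo model
            expected = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
            ratings[winner] += k * (1.0 - expected)
            ratings[loser] -= k * (1.0 - expected)
    return ratings

# e.g. three readers each preferred A to B, and three preferred B to C
scores = elo_from_pairs([("A", "B")] * 3 + [("B", "C")] * 3)
```

Replaying the pairs for several shuffled rounds lets the ratings settle into A > B > C; a Bradley-Terry maximum-likelihood fit would be the more principled version of the same idea.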
For divisive reviews, the low-hanging fruit is probably to use some kind of statistical signal for high variance in scores (standard deviation, difference between arithmetic and geometric mean, etc) as an additional output of the first round scores. Reviews with signs of being controversial could either have their average ratings adjusted down or be subject to a bit more scrutiny during your subjective review step.
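As a sketch of what that statistical signal might look like (names and the example numbers are made up; real cutoffs would need tuning against past contest data):

```python
import statistics

def controversy_signals(scores):
    """Summary stats that flag a divisive score distribution. The geometric
    mean is dragged down harder by very low scores, so a large gap between
    it and the arithmetic mean suggests a bimodal 'love it or hate it' spread.
    Assumes scores are positive, e.g. on a 1-10 scale."""
    mean = statistics.fmean(scores)
    return {
        "mean": mean,
        "stdev": statistics.pstdev(scores),
        "mean_gap": mean - statistics.geometric_mean(scores),
    }

# a 'love it or hate it' review vs. a consensus one (invented numbers)
divisive = controversy_signals([1, 1, 10, 10, 10])
consensus = controversy_signals([6, 6, 7, 7, 6])
```

Both reviews here have the same arithmetic mean (6.4), but the divisive one shows a far larger standard deviation and mean gap, which is exactly the separation a screening step could key on.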
Another thing that might help both with divisiveness and with the number of reviews read in general would be to have an explicit "did not finish" response on the form, with guidance to use it if you stopped because you got bored. This should help give readers permission to bail out early and move on to another review if they start reading a long review they find tedious. It should also help with divisive reviews that get a higher average rating than they otherwise would, because people who dislike them are more likely to stop reading and never rate them.
>For divisive reviews, the low-hanging fruit is probably to use some kind of statistical signal for high variance in scores (standard deviation, difference between arithmetic and geometric mean, etc) as an additional output of the first round scores.
I like this idea: reviews that inspire very low and very high scores may be better finalist options than their averaged out score would imply. A "sort by controversial" option might get us some good reviews!
What I think would really solve the problem is if there was a tool where you hit a button and it gives you the name of the review that has had the least number of votes so far. Just having that available would mean a lot fewer reviews that have only a few votes. However, I don't know if that would be technically feasible. Even a tool that just showed the number of votes (not the scores, just the number of people who have voted) for each review would even things out a bit.
>reviews that inspire very low and very high scores may be better finalist options than their averaged out score would imply
I was actually thinking the reverse. I noticed that with this year's finalists, there were several where the comments had a strong mix of people who absolutely loved them and people who absolutely hated them. And the people who hated any given review were often also complaining about the overall quality of the finalists.
I'm not altogether opposed to divisive reviews making the finals if the average reaction is strong enough, but I suspect there may be a problem with divisive reviews having their scores more influenced by luck of who scores them in the first round (causing both Type I and Type II errors), and also with their scores being artificially inflated by first-round readers who DNF them without leaving a score.
IIRC I waited until every book had at least ~5 ratings, then added ~3-4 "dud" ratings that were at exactly average to all of them, so that books that lucked into getting a few great ratings didn't beat more-rated books.
I'd expect that to go quite a way towards mitigating small sample size issues. I appreciate the cleverness and simplicity.
Scott, that doesn't work. Let's say one book had 5 real ratings, and another got 10. Five real ratings, like 5 coin tosses, are a smaller, less representative sample than 10 coin tosses. By adding in 5 more dummy ratings you don't get another bite of the real variance in the data. You just keep the same average result, but it looks more valid if people see that it's based on 10 results.
As it happens, my review last year was one of the ones that had so few ratings by midway through the voting that it was on a list you put out of reviews without enough ratings, requesting that people read some more from that list. Now I wonder whether your request brought its total number of ratings up to average. Did I really get as many real votes as are showing on the page of results?
I don't know what to do about the problem of sparsely rated reviews, but your hack doesn't work. Imagine a simplified and exaggerated example with coin tosses. You want to know whether a coin is balanced and fair, so you toss it twice, and it so happens you get 2 heads. But you know that coin didn't get a fair chance to demonstrate the real distribution of its results, so you add in dummy results that have the same average as the real result. So you add in 98 more heads, just to be generous. Now you have a coin that turned up heads on a hundred tosses (98 of which were dummy results). If a real coin was heads 100 times we'd be damn sure it was not evenly balanced and fair. But we can't say the same about the one with 2 real trials and 98 dummies that have the same average result as the 2 trials!
I think it does work. You're right that it doesn't add signal, but it does compensate for variance, which is what I want.
Imagine a hundred reviews, all of which have true goodness level 8, but where there's a lot of noise. Some have 100 ratings - since the noise evens out, they all get score 8. Others have 1 rating each. Since there's noise, some of these get rated 6, 7, 8, 9, or 10. In this world, all the winners are reviews that got one noisy rating, and that one noisy rating happened to be 10. That is, the ones with fewer ratings sometimes end up with an average of 10, and so they always beat the more-rated reviews with lower variance, which always end up with an average of 8.
This is approximately how the real data look - if I just took the end result, I think almost every finalist would be one of the reviews with the fewest ratings.
The solution is to start with a prior of all reviews being average, then let the actual ratings act as evidence to move them away from that prior, so that you need several ratings to appear to be the best review in the pile. That's what the extra dummy ratings are doing - setting the prior.
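In code, the dummy-vote trick is just additive smoothing toward the global mean. A minimal sketch, with `prior_weight=4` standing in for the "3-4 dud ratings" (the exact number and the function name are my assumptions):

```python
def shrunk_score(ratings, prior_mean, prior_weight=4):
    """Pad a review's real ratings with `prior_weight` dummy votes at the
    global mean, then average. Sparsely-rated reviews get pulled toward the
    prior; well-rated ones barely move. (prior_weight=4 is a stand-in for
    the '3-4 dud ratings' described above, not the contest's actual number.)"""
    return (sum(ratings) + prior_weight * prior_mean) / (len(ratings) + prior_weight)

# with a global mean of 6: one lucky 10 shrinks to (10 + 4*6)/5 = 6.8,
# while ten honest 8s only shrink to (80 + 24)/14, about 7.43, and win
lucky = shrunk_score([10], prior_mean=6)
steady = shrunk_score([8] * 10, prior_mean=6)
```

So the single noisy 10 no longer beats the well-sampled 8-average, which is the behavior the prior is meant to buy.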
I misunderstood what your system was — thought you were saying that you padded someone’s real scores with dummy scores that averaged out to the same average score the person already had. But I reread when less tired, and see that what you did was add in dummy votes that averaged out to the group average score — right? So if all the votes for all the reviews taken together averaged out to 5, you gave everybody the equivalent of enough 5’s to bring their total vote to 10.
So I see your reasoning. It's that if someone's short on votes, the best guess for what each of the remaining votes would be is 5, or a scattering of numbers that average out to 5. Another way of saying it is that you're regressing the average scores of people with fewer than 10 votes towards the mean. That's nowhere near as bad as what I thought you were doing, because of course outliers do tend to regress towards the mean. But it's still not great. Here are 3 reasons why, which really all come down to the same thing:
-The final scores people get who had 4 or so dummy scores thrown into their pot will be less valid measures of how interesting other people found their reviews. There’s just less actual information in their final number. It’s as though somebody interviewed you about your takes on this and that political candidate, and ran out of time, and so wrote an article about your politics where they assumed your views on the remaining candidates were those typical of California males.
-The few real scores for people who ended up with too few real votes are not great information. There have not been enough votes for various random factors in the mass of votes to cancel each other out. But they are still the best information we have about how their review is faring, and they have some predictive power for what their remaining votes would have been, had they gotten more. You are disregarding that information. Put in terms of the imaginary article about your political views, the equivalent argument is that if the interviewer must guess at your views about candidates you did not discuss, she should guess based on what you have said about other candidates, not on your demographics.
-Assuming 5 is the average score for all reviews over all voters, 5 is usually not the best guess for what scores people without many ratings would have gotten from real raters. It's only the best guess for those whose real scores, before you start padding with dummy scores, already average out to 5. For people whose real scores average higher or lower than 5, the best bet is that additional real scores (as a group, separate from their first few real scores) would have averaged out to somewhere between 5 and the average of their initial small batch of real scores. Everything tends to go towards the mean, but most things don't go *to* the mean, they just slide some distance in its direction. And there's a distribution of where sliders end up. Most slide a moderate distance towards the mean. A considerably smaller number slide all the way to it, and a considerably smaller number stay where they were and don't slide. A very few slide beyond the mean, over to the far side of average. And a few that were way above or below the mean get even further above or below it. Overall the most common place for a measure to land is somewhere between the first measure taken and the overall average measure after all measures for everyone are taken.
Perhaps we need more guidance about what rating should be considered "average for the lot", etc. To me, it's obvious that average should be 5 or 6, but some people here seem to love grade inflation (too many professors here?) and said they thought most books should get a 7 or higher. Maybe it should be pointed out that the book reviewers never see the ratings, so nobody is going to hurt anyone's feelings by giving a review a 2 if it deserves it.
~7/10 is the natural mode of a typical gaussian-esque distribution, because the spectrum represents the entire possibility-space, not just the set of the entries which were actually submitted. E.g. a rating of 2/10 should be reserved for people who are practically griefing, barely literate, etc.
(N.B. this isn't mere speculation of others' rationales. this is how i feel *personally* and I find it highly probable that this generalizes to others. I remember reading that OG thread you had in mind, but I didn't speak up because I figured someone else would.)
No, people do see their ratings, at least they did this year. I forget the context, but for some reason Scott put up a link where you could look at all the scores someone got (with some dummy votes mixed in, if the person got fewer than 10 real votes). Of course people don't get info about who their voters were.
I wish I knew where that was. I want to see the ratings of my review!
Ask on here, somebody will know. It was some time in the last month or 2, in a discussion of the book reviews. But you won't know exactly what you got. Scott explains earlier on the present thread (https://open.substack.com/pub/astralcodexten/p/vote-in-the-2024-book-review-contest?r=3d8y5&utm_campaign=comment-list-share-cta&utm_medium=web&comments=true&commentId=70606466)
that when all books had at least 5 reviews, he added some dummy ratings, I think enough to bring the total ratings for the review up to 9 or 10. The dummy ratings were all whatever the average was of all the ratings given to all the reviews, so I think probably around a 6. I'm not sure he actually gave 6's though. He may have given something that averaged out to 6's, like say 2 5's and 2 7's. If you keep on reading down the thread from that post of Scott's you'll see the next thing is my pointing out the ways the dummy ratings don't really equalize things for people.
I can't really think of a better solution, though. Well, actually I think it's better (though still bad) to just average out the 5 ratings somebody got, if somebody only got 5 ratings, and call that their result. But Scott mentioned that often there are a scattering of reviews with only one or 2 scores, and those scores are a 9 or a 10, and if he just went by the real ratings almost all the finalists would be reviews that got one or 2 high votes and no more.
Or we could limit the length of reviews, so that more people can read more reviews. We could require anyone who enters to read and rate at least 20 reviews. Once a review got 20 ratings we could remove it from the list, so that it didn't get any more ratings, then all reviews would get the same number of ratings. That seems like reasonable social engineering to me, but Scott doesn't like to make rules, and lotsa people here get very pissy about rules too.
A possible reason we might be book reviewed out: I found it surprisingly painful not to be a finalist when I entered a review last year. What mostly got me wasn't the failure to get an A+, it was realizing that nobody here was ever going to read my review and talk with me about it. Of course I knew in advance that that was the fate of non-finalists, but the reality bothered me a lot more than I expected. Talked with a number of other non-finalists who felt the same. I don't think I'll ever enter another book review, now that I've experienced what it's like not to be a finalist. Don't know what to do about that built-in problem, but I do wonder if it's leading a number of people who would otherwise have entered reviews to hold back from doing it if they got zapped by the Big Silence the year before.
I don't think I participated last year, but I didn't read all the reviews this year. So the silence may have been the luck of the draw as to certain topics prompting more reads? And the slight attachment that may follow the reading of a lengthy review?
I apologize if we were supposed to read them all. I expect we were. But internet time is a luxury in some ways.
If you reveal which review was yours, I expect you'd get some people to return to it.
What did you review last year?
Perplexities of Consciousness, by Eric Schwitzgebel. The thing's still up: https://substack.com/@bookreviewgroup/p-128035655
Trying to pinpoint the nature of "thoughts" (sorry for the possibly inaccurate shorthand) raises for me one memory, or rather a memory of something that as a child I could do at will, and did for a time, around the age of six. It was something that had a certain fascination but also left me disquieted, so that I would tell myself "not to do it". But then like as not, would do so. Typically only when I waked in the night, which I seldom did. I was able to close my eyes and - what I felt was happening - zoom out into space very, very fast. Note that I didn't understand anything about space and barely knew the word. The speed at which this happened - this movement within my mind that was less like movement of a body and more like my brain case expanding, seemingly infinite in extent, but yet I felt it to be "the universe" (again, not a word I could have used) - was disconcerting, vertigo-inducing, but exciting as speed is. But the invariable end result was what upset me more than the sickeningly rapid movement. I found that at the end of this interior sensory experience - well, I described it once to my mother, expecting perhaps that she would explain it away: I felt that I was not related to my family, that they were exactly as any other people, being now so distant, not bad or good but just nothing to me, as was the bed I'd left behind, the room, house ... They might have been the neighbors. This made me cold with fear, and ruined the exhilaration.
Mother didn't care to hear about this, and I'm sorry to burden you with this dull kiddie memory - but I've never encountered anyone else who thinks about what is happening, interior-ly.
Ooh yes there are lots of us.
I can remember thinking in my early teens that thoughts weren't really describable, and that the notion of thoughts as a stream of words or images was just a simple cartoon version of what they were. And that reminds me of one of *my* early memories. In primary school the teacher would always put up a spring poster that said something like "spring has sprung," and all of us little kids would make construction paper flowers to decorate it. But I always thought about how real flowers didn't look like our formulaic flowers, and how when the first warm day of the year came I did not say "spring has sprung" to myself. And that made me feel like I just wasn't a normal, decent cheerful person.
As to why your piece didn't get the give-and-take you wanted - I hadn't seen it before, or voted before - but perhaps it felt more like an essay than a review? Perhaps that's a contest SA might consider. Also, the Statue of Liberty part was an interesting exercise but I think possibly Americans not from NY or NJ are not quite as familiar with it as it may seem they ought to be.
As for why more people didn't vote for it, maybe because it seemed like an essay — yeah, whatever. It's not that I'm mystified that I wasn't a finalist, or feel like the reviewers weren't fair. What got to me was just suddenly realizing that nobody here except the 6 or so who voted would ever see it. As I said, I knew in advance that would be the case, I just wasn't prepared for how bereft I felt.
Would have thrown that a like and an approval vote if it'd made the cut. Discussion would likely have been lively and interesting, based on past introspection-themed posts like jhana. Did find the homunculus art disturbing, and noticed internal-skeptic epistemic alarms going off at several of the purported scientific conclusions (I assume the book had citations one could follow up on for the actual studies themselves); I also left with more of a sense of Schwitzgebel's views and theories than of the contents of the book itself. Which in some sense is fine, a book is mostly just a vehicle for delivering information. But I find it helpful to have a bright dividing line between what's a review of in-book content, versus what's an exploration-gestalt of additional context and outside material. Either way, the finalists would have been enriched by this entry. Guess that's more motivation to actually vote in future pre-selection.
This was my first year entering, and not making the finals also bothered me more than I'd expected, for the same reasons you describe. I did at least get a few people discussing it in the comments after the first-round voting closed, which gave me some good actionable feedback if I want to rework it for other audiences (which I am seriously considering doing) and also was a huge improvement over pure silence, especially since some of the comments were strongly approving.
I probably will enter again next year.
I thought it was fun, and it felt like a buffet to a reader - and all the pieces contributed to that FWIW.
I entered a review one year and didn't get anywhere, which I sort of expected. Of course, it is a little ego deflation to realise you're not as good as you were maybe patting yourself on the back as being, but that's no harm to prick vanity.
Do try again if you feel you have a book or other piece you really want to review, practice helps! And it really would be a pity if you felt constrained. Even if only six people see it, that's six more than if you never wrote it.
Do you think it would help if Scott (or a reader for Scott) would compile a list of, say, the 30 top-rated reviews that didn't make it into the final? I found your review good, and getting into that list may be a more realistic goal than becoming a finalist.
Making it into the shortlist would probably get you more readers, but probably wouldn't initiate a big discussion. But if you put it on your own blog, then perhaps the shortlist could contain a link to your blog post, and there could be some discussion in the comments there?
(In principle this idea is not limited to a shortlist. But if Scott publishes all reviews together with their rating, then it would be publicly visible when a review only received 3/10 as average rating.)
Thanks to those who responded by actually reading my old book review and/or suggesting ways to soften the the-lady-or-the-tiger setup of the book review contest. I think that in my case the best solution is not to write more book reviews, but instead, in a really egregious failure to generalize the book review lesson to similar situations, publish a serialized novel on Substack. It’s about life at a point in a future that features an artificial superintelligence. I’ll start posting it when I’m far enough along not to worry about keeping up with my promised publishing schedule.
honestly, same. It felt like a lot of work to write a long essay about a difficult book and then have it just... vanish into the aether.
Yeah, the chance of any kind of payoff is just too small for reviews, esp in proportion to how much time they take. With comments, there's kind of a sliding scale: the more engagement your post generates, the more response you get. That I am fine with. But with book reviews it's either the lady or the tiger.
> I don't think I'll ever enter another book review, now that I've experienced what it's like not to be a finalist.
I read a webpage once that was devoted to Snoopy's writing career. One section focused on the rejection letters he'd receive, saying things like "Dear contributor, we are returning your dumb story. Please don't write to us again."
The page author noted that "in my experience, rejection letters from publishers are scrupulously polite. But Charles Schulz has captured exactly how they feel."
For what it's worth I didn't find this year's finalists noticeably worse than last year's overall.
My complaint is that most of them aren't reviews, they are term papers (or even books themselves in a few cases). Even if one is good enough to warrant it, I still find it a bit daunting (let alone the ones that lost me early on). I think it could be interesting to enforce a modest word limit next time around, and it would also give writers a chance to flex a different skill.
Tastes differ, of course; I don't mind length if the review is good and holding my interest. There have been some where I've thought "Cut it back, dude, you've said all you needed to say" but that's because they bored me.
So sometimes the feeling of "This is too long" has more to do with internal boredom threshold than the substance of the review. I don't really want a word limit, but on the other hand I don't want to impose my own preferences, because if other readers do feel "the reviews are too long", then they're not enjoying the experience, and the book review contest should be for fun as its main purpose (we're not the freakin' Booker Prize, after all).
Eh...n of 1 and I'm Not A Representative Sample of the median ACX readership. But I've been getting my book review high this year from reading Zvi's heroic-length ones, plus FdB's occasional dabble in the field (although those tend to review the author as much as the book). So there are still Quality Reviews out there in the rat-adjacent sphere; The Talent hasn't been lost, the NYT hasn't poached and/or doxxed all of our good book reviewers.
Maybe it's just a selection effect thing? After all, the finalists we see are the ones voted for out of the initial set...perhaps there were a bunch of More Of This, Please reviews in there that simply didn't make the cut? I never have enough time to wade through the initial huge batch of submissions, so in that sense, it's my own small fault for being meh on the finalists. Can't complain if you didn't vote! I am wary of suggesting that there be a longer window for winnowing though...logistics are a nontrivial inconvenience, and not keeping things moving at a respectable pace causes significant drop-off of interest and participation. (Or that seems like a reasonable prior?)
Lining silver: p(me submitting a review) goes up when the field looks less competitive. Perhaps other intimidated loquaciousists will also feel so inspired.
"Lining silver: p(me submitting a review) goes up when the field looks less competitive. Perhaps other intimidated loquaciousists will also feel so inspired."
I hoped to do a review this year, but life and work just got too damn busy at the time. Maybe next year! If someone else doesn't review my chosen book before me.
You may consider fewer finalists and even a word count limit.
I always mean to rate some of the initial submissions to help select the finalists, but wind up intimidated by the sheer volume of reviews. Do I start from the beginning and privilege reviews at the top of the alphabet? Do I only focus on reviews of books I've read (or haven't read)? It might help to do something gimmicky and simply say "I've dumped the reviews into 30 randomized groups, pick whichever group your birthday falls on and look through those."
I read and rated some (maybe 10 or so?) of the submitted candidates, and found at least 2-3 that I would have preferred make the cut over all or most of the finalists. Looking at the full list, I remember liking the Godel, Escher, Bach one, as well as the (controversial) one on Sadly Porn. I also liked and rated How the war was won, which made the cut. Given that I found several good ones in the pre-phase, I was hopeful that the quality of the finalists would be really good this year, but personally I was somewhat disappointed and found last year to be much better.
So this could of course just be my personal taste not aligning with the rest of the electorate, but it could also be evidence that the screening process did not select the best reviews for the finalists.
This was my first time reading pre-finalist reviews — the one on Who We Are and How We Got Here stuck in my head much more than any of the finalists did.
I’ll check out the ones you mentioned! I’m curious if there were really hidden gems that nominations missed out on, or if individual preferences just vary more than I expect.
Just finding out about 2 arms and a head made the review competition worth it for me
It’s possible the complainers are just louder. I thought this year’s set were all pretty good, and felt I got something out of every review - even the weaker ones or those I didn’t really like.
I don't think any of them was very interesting. Perhaps because it is assumed people have already read interesting ones?
It might have just been luck of the draw due to the sheer number of reviews. I was going to vote in the first round this year until I saw how many I'd have to read.
If this type of comment isn't allowed, I apologize -- but I'm the editor of a humanities journal and we would love to publish book reviews by brilliant ACX readers. We can pay a small amount. If interested, feel free to get in touch:
https://athenaeumreview.org
athenaeumreview@utdallas.edu
It might be worth reading the ones I wasn't able to publish (findable at https://www.astralcodexten.com/p/choose-book-review-finalists-2024 ). If you find one you like, let me know and I can put you in touch with the author.
Thank you very much, I will.
I recommend you check out the review for "The Emperor of all Maladies", I thought it was excellent and was surprised it didn't make the finals.
If we're going to shill for our favourites, let me strongly nudge about the review of The Divine Comedy.
https://docs.google.com/document/d/14qa47TJ_Vyerx4XNgTCIh7PUZ_TOgNcU_eHm5So_zo0/edit#heading=h.4h0wifn3eabc
I hope nobody publishes anywhere my review of the Divine Comedy as it is. It needs a lot of editing.
If you're the reviewer, let me congratulate you! I really enjoyed it, and I liked how you included your own translations to show how English versions make it different depending on the translator (I don't know if it is genuinely difficult, but translations always mention how tough it is to turn terza rima into a corresponding English meter because there isn't a directly corresponding form in English verse).
If you do edit it and polish it up, I would urge you to submit it to the magazine. I think it's worth a wider audience. Even if that is only "people in Texas who read literary magazines" 😀
May I ask, is your magazine's name inspired by the London club? Whenever I see "Athenaeum", I automatically think of that (thanks to marinating in 19th century British novels):
https://en.wikipedia.org/wiki/Athenaeum_Club,_London
So an approval voting question. Not being a voting nerd, it seems to me that if you vote for more than one your vote should be divided between your picks. Pick two each gets 1/2 a vote, three 1/3 of a vote, etc. If all my votes count as one... so if I pick three it's like I get three votes, then the best strategy to get what I want is to approve of (vote for) about half the options. And that seems wrong. Let's say this blog was about 1/2 red and 1/2 blue, and the reviews were also easily divided into blue leaning and red leaning, then voting for all the reviews that leaned your way politically would be the way to reward your side. Or am I missing something?
Let's call the factions "blue" and "green" to avoid thinking about modern politics. Say 3 green politicians and one blue politician are running against each other in a district where 2/3rds of the voters are green. If the greens split their votes across their three candidates as you describe, the blues will win despite being only 1/3rd of the population. (This is called the spoiler effect, where a losing candidate affects the results of an election simply by participating.)
Whereas, under approval voting, the most moderate green candidate is likely to win (being approved of by some of the blue voters). This is a much more reasonable outcome.
Is approval voting how we got that good-for-nothing Aaron Burr as vp though?
Strategic voting does come into play when you decide your cutoff for approval. (Do you only vote for your first choice? Your first two choices? All of the options except your least favorite?)
However, first-past-the-post and instant runoff have worse strategic voting issues imo.
I think there’s no real “should” here abstractly. For some purposes (ones like the case you’re suggesting, with factional voting where each faction gets close to half the things being voted for) your suggestion would be good. In standard electoral contexts, people often have the opposite worry, that too many people will “bullet vote” for just the candidate of their favored party, and not give their “approval” to enough secondary candidates. Your proposal would exacerbate that problem.
Given Arrow’s impossibility theorem (and various extensions of it) there isn’t going to be a single voting system that is best for all sorts of voting. It’s going to depend a lot on the concrete situation and the kind of judgment we want to elicit from people.
Why stop at half? If you pick all of them, you get all the votes! ...Which goes to show that more votes do not necessarily give you a larger influence on the result. Approving of more candidates already dilutes your vote, making it unnecessary to further reduce it.
You're right that you can have more or less influence on the results depending on how many you vote for. It won't necessarily be optimal to approve of half of the candidates, but in some circumstances it might be. One exception would be if you feel much more strongly about your top few choices than about the rest of them, then you would just vote for those ones. Another exception would be if the half of candidates that you like better lines up with the half of candidates that almost everyone else also likes better, then you would be better off voting for only your top quarter of them.
I think it's not great that approval voting incentivizes you to think about these effects instead of purely about which options you like best, but it sure is better about that than when everyone only gets one vote. There you have to coordinate with the half of people who most closely agree with you on which candidate to vote for, or else you'll get the candidate that the half of people farthest from you like better. Overall I would rank the voting systems as:
Ranked Choice Condorcet slightly better than Approval Voting, which is slightly better than Instant Runoff, all of which are immensely better than First Past the Post (the common one).
... And now I took too long so Kenny Easwaran has probably answered this better, but maybe this can still clarify something further.
Oh if you pick all of them then your vote has zero (no) effect. Somewhere around 1/2 has maximal effect.
> Oh if you pick all of them then your vote has zero (no) effect.
This is true.
> Somewhere around 1/2 has maximal effect.
This is... mystical. How are you measuring the amount of effect?
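One concrete way to answer "how are you measuring the amount of effect?" is to define effect as the probability that a single extra ballot flips the winner, then estimate it by simulation. The model below is entirely my own illustration (random independent approvals, arbitrary parameter choices), not anything claimed in the thread:

```python
import random

def approval_winner(scores):
    # Deterministic tie-break: the lowest-numbered candidate wins ties.
    return max(range(len(scores)), key=lambda c: (scores[c], -c))

def pivot_probability(k, n_candidates=5, n_voters=100, trials=2000, seed=0):
    """Estimate how often one extra ballot approving k random candidates
    changes the winner, when every other voter approves each candidate
    independently with probability 1/2."""
    rng = random.Random(seed)
    pivotal = 0
    for _ in range(trials):
        # Each candidate's score is a Binomial(n_voters, 1/2) draw,
        # equivalent to independent per-voter approvals.
        scores = [sum(rng.random() < 0.5 for _ in range(n_voters))
                  for _ in range(n_candidates)]
        base = approval_winner(scores)
        bumped = scores[:]
        for c in rng.sample(range(n_candidates), k):
            bumped[c] += 1
        if approval_winner(bumped) != base:
            pivotal += 1
    return pivotal / trials
```

Note that approving everyone (k equal to the number of candidates) adds one point to every score and so can never change a deterministic winner, which matches the "zero effect" claim above; intermediate values of k are where any pivotal power lives, and where the maximum falls depends on the model.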
I obtained two of those after reading the reviews: #4 and #14. Of the two, #4 got my vote.
I was re-reading the Creutzfeldt-Jakob disease review and [before I got to that section] connecting it in my mind with Chronic Wasting Disease of deer and goddamned libertarian bubbas in Texas; there's basically a similar blame game to play as with the UK government. Idiots who thought vanity "deer breeding" was a good moneymaking idea started transporting deer around the state, like idiots, and Chronic Wasting Disease, which we didn't have in the wild, was introduced into the state by bringing in deer from places where they did have it. Even now, as far as I know, it is only present in these stupid "captive" situations (has it spread to people? - officially no, but there are anecdotes circulating of CJD among deer hunters), to the point that the state has had to set up check stations in a number of places - controversially! - much against the will of the bubba idiots. This is a terrible violation of their rights to get Chronic Wasted, or at the very least to get deer Chronic Wasted.
ETA: I'm sorry for being a crank, but consider this a PSA in some sort - I would not eat venison in Texas if I had not shot the deer myself after observing it for some time. As I do not hunt, I will not be so eating. Sure, it's probably all fine - but I would never say it's fine because elements of the state government said so. Kudos to our parks and wildlife agency for pursuing this against the bitching of the deer "breeders".
Also, it should be noted that those same people raise hell when the subject of protections for mountain lions comes up. (The last TPWD effort in this vein resulted in this radical recommendation: people should occasionally have to run their traps so that lions are not left to starve inhumanely; this was fiercely fought.) The eradication of those lions, in many areas, led to deer overpopulation, and a general feeling that "they weren't making deer like they used to" in terms of handsomeness. Which then led to this ridiculous deer-breeding contortion (among the stupid - normal people know this is stupid - you hunt what nature provides for you to hunt, and not inside a game fence).
As usual: just leave nature alone.
Looks like the solution is obvious: import some lions from other places, continents even, breed them and release them so that they eat the deer! <jk>
(Warning: depressing story)
-------------
My high-school friend's mother suddenly got some sort of spongiform encephalopathy out of nowhere. No one could figure out how or why; no apparent genetic reason, wasn't Mad Cow...
...but she ate a lot of venison her husband shot. They did the butchering and so forth themselves, and apparently there was some suspicion over an ill deer that she and only she "accidentally" got fed by his weird-ass father / her weird-ass husband, who seemed to resent her (& also did, it was said by my buddy and on the grapevine at our small Christian school, even before she began losing her mind).
Supposedly, the Chronic Wasting Disease in deer cannot be transferred to people—but it sure seemed suggestive. Friend's dad / her husband was real weird, real unpleasant; she seemed nice, although after she was driving another mutual friend and me home and went up the down ramp onto the highway, we uh... we didn't drive with her again (and indeed she got the "privilege" taken away for her own good shortly after).
As far as I know, the doctors never did come up with any alternative except "spontaneous" development of a prion disease, but I'm not 100% sure on that.
This was in West Texas, FYI, around 2003-2007 (not sure exactly when it was found out & when she died, but it was something like that IIRC—progression was slower than in BSE, I believe, but still fatal in the end).
That’s so sad and horrible however it came about.
If the review is up to date, no one knows how deer in a sheep pen got it from the sheep.
Infectious things are to be avoided, perhaps, not just *not eaten*.
There’s no proof that CWD is transmissible to humans, but I’m not so sure it can’t be; prions are famous for jumping species.
I found an article (Wang et al, 2021) claiming to have generated human CWD in “humanized” transgenic mice.
As a voting nerd, I feel seen. Next year, score voting!
One thing I found strange was that so many poems made it into the finalists. I'm personally not a fan of poetry so I just immediately skipped over those.
I mostly agree with disliking poetry, but the Ballad of the White Horse really won me over.
Is 2 out of 14 really that many? I was surprised there wasn't more, given that Scott had announced he would do affirmative action for poetry and other "non-traditional" review subjects.
FWIW you could count the Poet's Craft Book as a third, being about poetry in general. Whether it's 14% or 20%, given the infinite number of non-traditional subjects to review, it's still overrepresented.
I really liked the English Rhyming Dictionary review and will be pleased if I'm correct that it was Scott's.
I don't like poetry, but I ended up ordering the book off eBay and intend to give it a read because of the review.
Next time around, could you require people to submit short hooks/summaries of their reviews? I would have liked to read more, but it's just hard to pick them from the title alone. Alternatively, you could use AI to generate summaries.
No, don't do this; it would spoil the reviews that build up to the main part/thesis without immediately giving it away - some of them even do it in ways that subvert the reader's expectations. LLMs are available to everyone; if you want an AI summary, go do it yourself.
Sorry, I wasn't referring to the finalists. I meant the beginning stage of the contest. I think it would just make it easier to find more "hidden gems" than by just looking at the title.
Oh, yeah, makes sense. Though I would still suggest an option to hide the summaries or something for those who don't want to be spoiled in the initial stage either.
I wouldn't want that to be part of the contest, but it could be nice for deciding afterwards which of the non-finalists to read anyway.
One of my big takeaways from the limited feedback I got on my non-finalist review was that I needed to explicitly explain at the beginning of the review why the book, which was very old and on an esoteric-appearing subject, is still important and relevant to people who aren't history and/or this-particular-topic nerds. I did discuss this, but I saved most of it for the conclusion when it was probably too late to do much good.
Which was your review? I either read or skimmed all of them and I’m happy to give you my feedback fwiw
On the Bondage of the Will. Thank you, I appreciate the offer.
Oh wow! I’m pretty sure I rated your review a 10. It’s great. I’ll reread it and post some thoughts here by Wednesday
Awesome, glad you enjoyed it!
The first sentence is a bold move. You had me at "five hundred year old theological debate" but there's a reason I don't often discuss such things with other people. There's not a lot of us in this part of the pool. Ah well, like Schopenhauer said somewhere about all his unsold editions: "my works are like a mirror, if an ass looks in, you can't expect an angel to look out."
I really enjoyed the review and hope you publish it somewhere. Just to offer some feedback though:
Call me old-fashioned, but since you initially used Erasmus's Latin title, De Libero Arbitrio, I think you should point out Luther's reply is titled De Servo Arbitrio.
More substantively, I see you already pondered moving your conclusions more to the front. That might well be worthwhile. "The Background" is well done, but it's probably nice to inform a more casual reader as to why these people are worth reading about before launching into detailed exposition.
You should expound more on your interesting insight that today's debates are "rhyming" with the Reformation. I would like to read more about that.
An aside: Luther says: "The human will is like a beast of burden. If God mounts it, it wishes and goes as God wills; if Satan mounts it, it wishes and goes as Satan wills. Nor can it choose the rider it would prefer, or betake itself to him, but it is the riders who contend for its possession." (De Servo Arbitrio, pars i. sec. 24.) I've long maintained that quote is the source of Nietzsche's more famous maxim: “Can an ass be tragic? To perish under a burden one can neither bear nor throw off? The case of the philosopher.” (Twilight of the Idols).
Finally: I was under the impression that one of Erasmus's major concerns in Libero was with the doctrine of exclusive salvation, i.e., it sends the immense majority of mankind to hell entirely irrespective of their will, e.g., those dying in infancy, those who lived and died in heathen lands, etc., to which Luther replied in Servo, "This is the acme of faith, to believe that He is merciful who saves so few and who condemns so many; that He is just who at His own pleasure has made us necessarily doomed to damnation; so that, as Erasmus says, He seems to delight in the tortures of the wretched, and to be more deserving of hatred than of love. If by any effort of reason I could conceive how God could be merciful and just who shows so much anger and iniquity, there would be no need for faith." (pars i. sec. 23).
To me, these arguments are some of the most dramatic and may well grab the attention of a more casual reader, so you might put some of that in a revised version. Hellfire gets eyeballs?
Anyway, not that it's not a great review as is - I just thought I would offer some comments. Thanks for writing the review!
> Last year we did ranked choice voting. This year, to satisfy the other half of the voting system nerds, we’re doing approval voting.
Speaking as a voting system nerd, I LOL'ed.
You guys should organise a poll to determine which voting system we use next year. :)
Will you use a Condorcet method next year for the third half of voting theory nerds?
quadratic voting next year
Voted!
I've enjoyed the various book reviews throughout the years, but through a combination of laziness and "dammit I can't decide", never really ended up voting. So this year, with approval voting, and the fact that some reviews were so bad I wanted to spit iron nails, I got off my ass and voted for the 6 best of this crop. Thank you Scott, and ACX, for this year's book review contest.
I was quite surprised to see #7 leading the Manifold poll this morning, but also, #7 was the only one I felt compelled to tell someone else about, because it was so uniquely and memorably unsettling to read, so I guess that says something.
To satisfy the third half of voting system nerds, can we combine approval voting with ranked choice voting next year? Find the Smith set of the candidates with ranked choice voting, then select the winner as the review with the highest approval amongst that set. I'd be glad to write the software to run the election.
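The hybrid being proposed here is straightforward to implement. This is a minimal sketch of my own (all function names are mine, and the combined ranking-plus-approval ballot format is an assumption about how such an election would be run): compute the Smith set from pairwise comparisons of ranked ballots, then pick the most-approved candidate within it.

```python
def pairwise_beats(rankings, candidates):
    # beats[a][b] is True if strictly more voters rank a above b.
    pos = lambda r, c: r.index(c)
    beats = {a: {} for a in candidates}
    for a in candidates:
        for b in candidates:
            if a == b:
                continue
            above = sum(1 for r in rankings if pos(r, a) < pos(r, b))
            below = sum(1 for r in rankings if pos(r, b) < pos(r, a))
            beats[a][b] = above > below
    return beats

def smith_set(rankings, candidates):
    # The Smith set is the smallest set whose members pairwise beat
    # everyone outside it; it is always a prefix of the candidates
    # sorted by number of pairwise wins.
    beats = pairwise_beats(rankings, candidates)
    wins = {a: sum(beats[a].values()) for a in candidates}
    order = sorted(candidates, key=lambda a: -wins[a])
    for k in range(1, len(order) + 1):
        S, rest = order[:k], order[k:]
        if all(beats[s][t] for s in S for t in rest):
            return set(S)
    return set(order)

def smith_approval_winner(rankings, approvals, candidates):
    # Highest approval count, restricted to the Smith set.
    S = smith_set(rankings, candidates)
    score = {c: sum(1 for ballot in approvals if c in ballot) for c in S}
    return max(S, key=lambda c: score[c])

# Example: a three-way Condorcet cycle among A, B, C, plus a widely
# approved candidate D whom everyone ranks last.
candidates = ['A', 'B', 'C', 'D']
rankings = ([['A', 'B', 'C', 'D']] * 3 +
            [['B', 'C', 'A', 'D']] * 3 +
            [['C', 'A', 'B', 'D']] * 3)
approvals = [{'B', 'D'}] * 5 + [{'C', 'D'}] * 3 + [{'A'}] * 2
winner = smith_approval_winner(rankings, approvals, candidates)
```

In the example, D has the most approvals overall but loses every pairwise matchup, so the Smith set is {A, B, C} and approval counts break the cycle in favour of B. That is exactly the appeal of the hybrid: the Condorcet stage filters out pairwise losers, and approval only decides among the cycle.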
Wheres the Sadly, Porn review?
If you're thinking of the review that was featured on this site, that was Scott's. I believe there was a review entered in the contest, but it didn't make the finals.
That's a shame. It was easily my favorite, wish I knew who wrote it.
It was interesting, a fun bit of erudite razzle, though full of the kind of unearned presumption some people mistake for effective swagger. But – without spoiling its nature – that a jig was on became apparent fairly early, and once the jig was up, the result was confirmed disappointment rather than grudging admiration.
Yes, here’s how the “Sadly Porn” review went for me:
1. This is interesting!
2. But also long. Very, very long.
3. My patience is running out.
4. Screw this, I’m skipping to the end.
5. [skips to the end] oh, so that’s how it is. Nope! Mr. Reviewer, I give you no credit, and may God have mercy on your soul.
FWIW, I didn’t like Scott’s review of “Sadly Porn” either, and if I never think of “Sadly Porn” again, it will be too soon.
Have you actually read the book? Wouldn't blame you if you hadn't, of course. I thought Scott's review was a good read, and an honest grappling bout with the source material.
As to TLP himself... I think he had something worthwhile (and ideologically neutral, despite his self-id as a solid Republican) going on with his critique of precious individualism and the paralysis that comes with scorched-earth ego defence. But also plenty of unfortunate tics – Athens is not quite the killer allegory, advertising isn't quite the killer zeitgeist barometer, etc. I found him fascinating, regardless.
IIRC, someone claimed authorship in an open thread. I'm near certain.
<performs google-fu>
The culprit was Pseudo-Longinus.
https://www.astralcodexten.com/p/open-thread-334/comment/59335674
I want to express my appreciation for all of the book reviewers and for Scott for catalyzing this work.
Thank you to all involved!
I assumed the winner here would be obvious, but then I looked at the prediction market, which was completely different from my picks. At the time I checked, the top two on Manifold were both ones that I was disappointed by and didn't vote for.
This was one of my favorite things I’ve come across on Substack (and how I found this one), incredible job to everyone. There wasn’t a single one I wasn’t fascinated by, but the Real Raw News review was one of the best pieces of writing I think I ever came across. I was among the few that knew of it beforehand — it’s a huge inside joke at my workplace, and a small circle of my colleagues will constantly slack the newest stories to each other. It always just seemed like a silly throwaway thing to me, that review made me actually think about it for the first time. I’ve been raving about that review to all my friends ever since.
The other biggest one I tell people about is Two Arms and a Head, one of the most horrifying things I’ve come across.
Anyways, thank you greatly for this. I’m new here, so I’m glad it sounds like you did something similar last year, which I will go search for. Please get this going again as soon as possible! I may throw my hat in this time.
It was my first time too, so great to read all the reviews even if we're not likely to read the books, and I really don't want people to think readers are over book reviews! I think it's wonderful and maybe I'll have a go in future too.
I feel that many of the reviews were far too massive. A review should not be a book with comments on a book - perhaps next time it would make sense to establish some guidelines on maximum length?
These are essay length reviews, mostly. Fairly common in long form magazines.
Am I going crazy or were there five or so more reviews in the original finalist announcement?
Maybe I ought to vote for #7, Two Arms and a Head, for being, to me, a genuine infohazard. I read the review, large portions of the book, a blog linked in the comments by a man dying from cancer, and the rest of the comments all in between my doctor gravely referring me to a dermatologist and actually seeing the dermatologist. I don't know what happened, but I became basically convinced that I too had only a couple of months to live. I had the most bizarre and stressful week of my life, up to and including desperately begging God to spare my life...which I guess he did, because there was nothing actually wrong with me. I'm a really levelheaded person and have a happy life and good health. What on earth happened? Anyway I would vote for it, but I can't look at or think about it ever again, so Real Raw News it is.
[Hugs]
Aaaah I'm so sorry my review made you spiral. From the bottom of my heart, I hope you've been feeling better lately. I've also had my fair share of health scares, and it's a special level of scary that I don't wish on anyone. [Hugs. All the hugs.]
Real Raw News was pretty awesome. I voted for it, too. Good choice.