Pedophilia categorically causes harm to children, so I think that comparing pedophiles to other reprehensible people is acceptable (and before you start blathering on about 19-year-olds and 17-year-olds: that's not what the average person means when they talk about pedophilia, I'm certain you know that, and that whole line of argument is a deflection from the actual point).
Provided they do not act on their desires, avoid situations where they would be tempted to act on them, and get help, I don't think pedophiles should be, say, stoned in the public square. But I don't think they should be proudly touting their status as a "virped" or "NOMAP" or whatever the latest attempt to turn a mental disorder into a tribe is called unless they're fine with people judging them accordingly. Even if we divorce all moral compunction from it, it still viscerally sits on the same level as someone trying to make a proud, socially-acceptable identity out of the fact they can only be aroused by eating feces.
Because you're just barely too neurotic about your brilliance (mostly kidding, it's a wellspring of fruitful introspection at a high personal cost)
Edit for clarity: by "mostly kidding" I meant to imply that the correct direction is to be less neurotic and own your brilliance; it's probably the only thing holding you back, if anything is.
Edit 2: I sometimes over-compliment, but since people are more sparing with praise than criticism, I felt like hedging against that, and I don't feel at all abashed about describing you as brilliant.
Yeah, of course he’s brilliant. You are right about the personal cost of being a skosh too self-aware and self-critical, too. But the result reveals a charmingly deep humanity.
Yeah, but "Scott does data journalism about a topic he just became interested in" is probably an endless goldmine. More broadly, I subscribe because whenever he learns about a new topic he generally finds interesting things to say about it.
Agree. It's a pity that the more important something is, the more likely Scott already heard about it a long time ago and got bored of it. I make a conscious effort to minimize my news consumption so that recent doesn't displace important, but then maybe I look like an idiot for not knowing about the latest tempest in a teacup. I figure if anything really important happens I'll probably hear about it from friends. But then maybe I also stay unaware of how awful the reporting is in the Newspaper of Record and lose opportunities to dunk on it and motivate improvement in media. The NYT and WaPo are so horrible that I completely stopped listening to them ages ago.
As an urban planner, it was fun for me to read his take on why we stopped designing buildings people like. I've been researching that subject for years and he hit on stuff I had never thought of. His mind is quite the floodlight.
Yeah, I think I remember Bill Simmons saying something like that about why he gave up writing columns a few years ago: he only had so many original ideas in his head, and after 15 years or so as a regular columnist, he'd gotten them all out there.
This is very insightful and I'm curious to see what you do in the next phase. Personally, I really enjoyed Unsong; perhaps something in that direction. Encoding your worldview and insights into fictional worlds might be a reasonable next step, right?
There needs to be an index of Scott's posts, definitely covering the Substack and SSC, and maybe his longer posts on LessWrong and LiveJournal. Then if you're interested in a topic he doesn't want to write about any more, you can look it up in the index and find thoughts that are old to him but new to you.
It seems like it's always going to be an ask to keep people quite as engaged and excited as they were at the beginning and I would expect almost all of the feeling has to result from 'point 1' sort of considerations. I felt bad reading that reddit thread and knowing you'd see it! I still think your thoughts are worth more than the price of a subscription and hope you know a lot of people still really enjoy your posts :'(
You are, in fact, using simulated annealing wrong. What you describe is the complement of it: a classic technique for finding a local optimum. SA is adding noise to your small steps, so that you don't get stuck in a local optimum but have a chance to find a better one (even the global optimum), using the random jumps to luckily hop over local barriers and fall into neighboring (hopefully better) optima ;-)
So maybe you should use SA if you feel trapped in a comfortable routine but suspect you could do better: just do crazy things from time to time :-p
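If anyone wants to see that cooling-down idea concretely, here's a minimal sketch of simulated annealing (in Python, on a made-up one-dimensional cost function; all the parameters and the cooling schedule are just illustrative choices, not any particular library's implementation):

```python
import math
import random

def cost(x):
    # Toy objective with lots of local minima; purely illustrative.
    return x**2 + 10 * math.sin(3 * x)

def simulated_annealing(x0, t_start=5.0, t_end=0.01, cooling=0.95, steps_per_t=50):
    x, t = x0, t_start
    best_x, best_c = x, cost(x)
    while t > t_end:
        for _ in range(steps_per_t):
            candidate = x + random.gauss(0, t)  # jump size scales with temperature
            delta = cost(candidate) - cost(x)
            # Always accept improvements; sometimes accept worse moves with
            # probability exp(-delta / t). That's the noise that lets you hop
            # over barriers between local optima while things are still "hot".
            if delta < 0 or random.random() < math.exp(-delta / t):
                x = candidate
                if cost(x) < best_c:
                    best_x, best_c = x, cost(x)
        t *= cooling  # cool down: big crazy jumps early, tiny refinements late
    return best_x, best_c

print(simulated_annealing(x0=8.0))  # usually lands near the global minimum
```

The "do crazy things from time to time, then settle down" advice is basically the temperature schedule in that loop.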
Could be... but I guess my advice holds: even if changing your routine feels risky and really sucks most of the time, do it anyway, even if rarely. Turn up the heat before letting it cool back down into a (hopefully better) routine :-)
ETA: Taking the metaphor more comprehensively, it's either a deliberate part of simulated annealing where the noise is specified to decrease over time, or just the natural decrease in step size from any iterative optimization strategy.
The gradient has both magnitude and direction, so the size of the jumps is proportional to the steepness of the error function. For a smooth function, the slope gets shallow near a local extremum (as opposed to at a sharp corner), and thus the jumps naturally decrease in size as you get closer, without needing to explicitly build that in (though sometimes it's helpful to do that anyway).
Ah, that makes sense. Still something I'm unclear on: isn't the size of the jumps strictly controlled by the algorithm in gradient descent, rather than just naively proportional to the length of the "gradient vector"?
Learning rate decay: start with a large learning rate to quickly converge 'near' good solutions, then decrease the learning rate to settle into an optimum.
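A rough sketch of both effects discussed above (the step shrinking as the gradient flattens, plus an explicit decay schedule layered on top), with a toy function and made-up constants:

```python
def gradient_descent(grad, x0, lr0=0.1, decay=0.01, iters=200):
    x = x0
    for k in range(iters):
        lr = lr0 / (1 + decay * k)  # explicit learning rate decay over time
        x -= lr * grad(x)           # step also shrinks as the gradient flattens near the optimum
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
print(gradient_descent(lambda x: 2 * (x - 3), x0=10.0))  # ends up near 3
```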
I think your usage is fine. Greg Kai is talking about how machine learning engineers use simulated annealing to ensure that the neural network is not stuck in local minima by programming in large jumps in the solution space. You are describing how you were naturally using simulated annealing (because you were anyway bouncing around a lot, and didn't need to deliberately program it in), and then eventually started bouncing around much less.
No, I think we're talking about a different effect here. Whether the optimum that Scott is heading for is global or local, he's taking smaller and smaller steps as he gets closer to it.
In a totally different metaphor, you can just say "plucked all the low-hanging fruit already".
I disagree somewhat with Greg here; I think your use of the term is fine. It might be slightly inaccurate, I suppose, but the point is the cooling-down effect you're pointing to, which simulated annealing also has. So depending on what you're emphasizing, simulated annealing may be the best metaphor.
There are also (naturally) different forms of simulated annealing, but all of them lower the noise over time. So in the beginning it encourages big jumps and in the end it discourages big jumps. But the point of simulated annealing is to get closer to the global optimum, even if we're sacrificing some ability to approach the optima that are nearby (the local optima). So perhaps Greg was primarily responding to "you end up at some local optimum".
If you want another machine learning metaphor, you could use the idea of a gradient descent learning rate optimizer. Optimizers explicitly use the gradient to inform the step size of the gradient descent algorithm, so that when there is a steep gradient the step size is high, and when there is a gentle slope the step size is low. This explicitly looks for the local optimum (which is less of a problem in higher dimensions). See this nice visualization from Sebastian Ruder: https://ruder.io/content/images/2016/09/saddle_point_evaluation_optimizers.gif
Of the two I'd say simulated annealing fits better.
> So perhaps Greg was primarily responding to "you end up at some local optimum".
Actually, I'd say the usage is fully correct. SA does not guarantee a global optimum - just a better chance of getting at it than naive gradient descent, and if not, a very good chance that you'll end up in a close-to-the-global optimum. Pretty much the same way that having exotic experiences and trying out many viewpoints while you're young gives you a better chance at having a more complete view of the world than if you just iterate on what your parents told you.
Agreed, although I do still think that particular quote is somewhat incorrect, as you don't end up at a local optimum in SA; as you said, the point is to get close to the global one. But really that's splitting hairs, and it's a good metaphor imho.
Just to add another voice, I think you used it entirely correctly and I'm a mathematician who uses similar optimization algorithms professionally (global optimization by intermittent diffusion most often).
Well, that's just iterative optimisation: almost all optimisation algorithms take smaller and smaller steps as they converge to an optimum. That's what you describe, and it's indeed also part of the human way of improving at something over time: as you get better, further improvements tend to become smaller/slower.
SA is the opposite: it's to deliberately add noise to the steps of the iterative optimisation in order to decrease the chances of being trapped in a local optimum. The amplitude of the noise is adapted depending on where you are (number and/or size of iterations) and indeed goes to zero at the end (otherwise you would never converge)...
But it's really an addition to what Scott describes (a classic optimisation, done by a human but similar to most algorithms in the sense that steps get smaller as you converge to your optimum): SA would be for Scott to deliberately change his way in non-optimal, random directions... mostly when he starts to converge (early stages are big and semi-random anyway, late stages are the final convergence), but maybe also when he has clearly converged (doesn't see how he could improve) yet is somehow dissatisfied with the result.
That would be SA in a context of self-improvement (and it's indeed something people do, in sports training for example, when trapped in bad habits, which are in fact local optima).
I actually think you're mischaracterizing it when you say SA does "the opposite" of what other iterative optimization algorithms do. SA is itself of course an iterative optimization, but when we contrast it with a naive hill climber, the point is that SA can jump from one slope to another, which it does less and less as time goes on.
Yes, it does this by means of noise, but that doesn't mean it does "the opposite" of a naive hill climber; in fact I'm unsure what that would mean exactly.
> SA would be for Scott to deliberately change his way in non-optimal, random directions... mostly when he starts to converge
You made a similar point before; you seem to think that SA makes the temperature go up midway through the search, but this is not correct: the temperature always goes down.
But all of this is about SA technical specifics, while Scott's point is aptly described by SA. In fact, taking your description here: "to deliberately add noise to the steps of the iterative optimisation (...) indeed [the noise] goes to zero at the end" - this perfectly fits what the metaphor is intended to say, as far as I interpreted it.
I mean that local optimizers (basically all of them, barring some initial landscape sampling and interpolation) take smaller and smaller steps in the optimal direction: the gradient, when you have it (or a reasonable approximation of it), or the best among a few tested choices.
SA will add noise to those steps, making them non-optimal or not even the best among the possible choices (i.e. the opposite of classic optimisation: it's deliberately choosing to do something different even if it is worse). And you select the noise amplitude so that it increases the chance of jumping to a better optimum without slowing down total convergence time too much (that's all the subtlety of annealing, simulated or not: you have to heat up at the right time and cool down at the right rate to get what you want).
SA (to me, but I work in the field; not neural networks but classic engineering optimisation) is not an optimization algorithm per se, it's an ingredient you add to a deterministic optimisation algorithm, one that will slow it down but decrease the chance of being trapped in a (not so great) local optimum. Basically you add it to NR/steepest descent or variants of them.
In genetic algorithms, the SA equivalent would be the mutation rate, while what Scott describes would be the fact that initially the less-fit variants being removed will really be much less fit, while later on most of the population will have similar fitness. Mutation is an orthogonal ingredient, needed to ensure that you explore more of the design landscape (and decrease the chance of converging to a poor local optimum) but slowing down convergence to a single design...
SA and mutation are really counter-intuitive in the sense that, in order to increase your chance of converging to the global optimum (or a good local one), you need to decrease your chance of being trapped in a bad local optimum, and a way to do that is to avoid systematically changing in the ways that improve your results the most. That's the only way to cross the local barriers that trap you in bad local optima. But you can't do it all the time or too much, else you end up not optimising anything and just drifting...
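To make the genetic-algorithm version of this concrete, here's a bare-bones sketch with a toy bitstring fitness function and invented parameters; the mutation rate is the exploration ingredient described above (annealing it down over the generations would be the SA-like touch):

```python
import random

def fitness(bits):
    # Toy objective: number of 1s (the "design" we want the population to converge on).
    return sum(bits)

def evolve(pop_size=30, length=20, generations=100, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half. Early on the culled variants are much
        # less fit; later on almost everyone has similar fitness.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Mutation: the orthogonal ingredient that keeps exploring the design
        # landscape instead of collapsing too early onto a single design.
        children = [[b ^ 1 if random.random() < mutation_rate else b for b in parent]
                    for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

print(fitness(evolve()))  # typically close to the maximum of 20
```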
I agree with everything here, especially your last paragraph. And now I see what you mean by doing the opposite: you don't mean the entire algorithm is doing the opposite, you mean that at some steps it accepts a non-optimal jump, the opposite of what optimization algorithms normally do. That makes sense.
And reading over Scott's description again:
> the thing where if you’re doing an optimization problem, you start by making big jumps to explore the macro-landscape of the solution space, then as time goes make smaller and smaller jumps
That really only captures iterative optimization as you said, not SA. Nevertheless I'd still say that the idea of SA as you describe it here really captures the point Scott was going for.
Indeed... I was surprised that so many people disagreed somewhat with me while also showing they knew what they were talking about. Looking at the English and French versions of the Wikipedia page, I start to understand why (I am French-speaking): the pages are certainly not incompatible, but there is a subtle language difference. In English, SA seems to apply to the whole optimisation process. In French, "recuit simulé" applies more specifically to the modification of a classic deterministic algorithm, as a parametrised added noise (T is the parameter).
It's subtle, but it explains why I got the simultaneous impression of disagreeing on SA while agreeing on the description of optimisation.
I still think the French terminology is more useful, as it emphasizes what is specific to SA compared to purely deterministic optimisation approaches, and is closer to metallurgical annealing (a way to get out of the local optimum just after quenching by adding thermal noise, but controlled thermal noise, so as to keep the benefit of quenching and not go back to the completely non-optimal original state).
But hey, I'm French(-speaking), so it would be hard not to disagree with English terminology ;-p
So Scott is saying that his bouncing around between ideas was a demonstration of simulated annealing. He was jumping about a lot at first, and found the neighborhood of his sweet spot, and then started jumping around much less.
What you are saying is that simulated annealing is pre-programming larger jumps early on in the search for the global minimum. In some sense humans, like Scott, are already pre-programmed to do that. Neural networks are not. That is why simulated annealing has to be programmed in separately.
I think Scott is using the term 100% correctly. Annealing literally means "cooling down", and it comes from the idea that hot particles jump around in the energy landscape, while cold ones are stuck in their local optimum. Yes, SA uses noise (controlled by the parameter T, which stands for temperature). But the name-giving feature of the SA algorithm is that you lower the temperature over time, which is exactly as Scott uses the term.
No I think he’s mostly right: in simulated annealing you start with a high temperature (in your teens I guess) that corresponds to a large probability of going uphill to a higher energy state, and then as the temperature drops over time/Scott enters his 30s (this is the annealing part) you converge to always taking downhill steps that essentially behave like gradient descent. Source: I wrote a paper once that used simulated annealing as a gradient-free method.
The intuition here is annealing metal: if you quench hot metal by sticking it in cold water then all the individual molecules drop into whatever low energy state is closest, so you end up with a very chaotic crystal structure. If you instead let it cool slowly then it has lots of time in the high energy state where big non-local changes are possible and so it can converge to a more uniform low-energy crystal structure as it cools.
Exactly. In fact, AFAIK (but metallurgy is not my speciality), it's not only cooling it down slowly, it's also re-heating it and keeping it at a moderate temperature for a time, then cooling it back down to ambient at the correct rate.
It's clearer in French, where annealing translates to recuit, meaning to cook again, or re-heat. The full heat treatment would be: heat to T1, quench, heat to T2 < T1, cool it down slowly (the time at T2 and the subsequent cooling rate are important).
Heating to T1 followed by a more or less slow cool-down is not (AFAIK) annealing, and convergence to an optimum (which is another way of saying your update steps get smaller and smaller) is not simulated annealing in an optimisation context.
Your second paragraph describes hardening (T1+quench) and tempering (T2). Annealing is in fact the process of holding metal above a critical temperature (T1), then cooling slowly enough to preserve the maximally unhardened condition, for ease of machining (source: am a machinist).
I don't think that's correct, or at least is off target. The reason it's called "simulated annealing" is that you are mimicking the process of real annealing (in metallurgy) by gradually reducing the temperature, id est you gradually turn down the amplitude of your noise as you settle into the valley that (hopefully) leads to your global optimum.
It would appear you are critiquing Scott's use of "taking smaller steps" because, as you (correctly) point out, often the individual Monte Carlo step size doesn't change, and doesn't need to. But what he's getting at is that the net excursions (over however many steps you want to average) become smaller and smaller as you turn down the temperature, and that is both the feature he is describing about the evolution of his ideas and the most obvious feature of simulated annealing simulations.
Maybe there's an actual name for the phenomenon I'm about to describe. I call it "new vs best".
When you write a new post, people tend to compare it to the best work you've ever done (Moloch, or whatever). Statistically, the new post is almost always going to be worse. So it looks like you've fallen off in quality.
But that's an unfair comparison. A fairer one would be to put one of your newer posts against a random SSC post from 2015 - if you do that, I'm confident that your newer writing holds up, and has maybe gotten better.
Another factor is that (in my opinion) things were actually more interesting in 2015. Take neoreaction. Whether you agreed with it or not, that was a fascinating thing that was fun to talk about. It's hard to find an analog for it in 2022.
We live in a media landscape where 80% of the air is sucked up by COVID and vaccines and Trump. It's actually boring as hell and I can't wait for it all to end.
I've been calling that effect "Time Distillation", particularly in the context of comparing popular media quality now to that of past eras. Namely, a lot of popular media is mediocre-to-poor and always has been. But it often feels like the new stuff is much much worse than older stuff because the dreck of past eras has been disproportionately discarded and forgotten, as the passage of time has boiled off the impurities and concentrated the timeless classics in our awareness, while the contemporary dreck hasn't yet been distilled away.
The "golden age" of sci-fi -- Asimov, Bradbury, and Clarke -- all seem mediocre in comparison with the state of sci-fi in 2022. I can't imagine how bad the rest of the sci-fi must have been back then. But Orwell holds up. Michelangelo seems mediocre in comparison to corporate drones who build game levels today, so the masses of artists in his day must have been really atrocious.
I'm reading The Dispossessed now, having read The Martian Chronicles (and a greater amount of Heinlein) and seen 2001 but not having read Asimov or Clarke. How does Le Guin stack up compared to them?
"Asimov is good at ideas. Not so good at writing actual humans."
Hm, Liu Cixin and the Three Body Problem (actually the whole trilogy) seem relevant. Amazing ideas, but all major characters are variations of murderous sociopath, or else comic relief, and female characters are either inept, evil or ridiculously idealized supernatural beings of ineffable beauty, but zero of his women are human.
Maybe good scifi is written by shape rotators and good characters are written by wordcels, or to reference an older SSC which posits the same dichotomy in different words:
This may be the bottle of Riesling talking, but if scifi is a (or the) shape rotator genre, is fantasy a (the) wordcel genre? n=1, Tolkien invented several languages
It's really interesting to me that Andy Weir has become a well-known, high-profile sci fi author these days. I still remember him better from the days when he was still an aspiring author back when he was doing the webcomic Casey and Andy.
The final page of the comic ends with a joke based on the fact that an editor had rejected his manuscript for a story saying that it had "too few characters," and suggested that he add more to fill it out. He protested that it was a complete story without room to add more meaningful characters into the narrative, but joked about the possibility of adding another character named Bob to the story by occasionally finishing scenes with the line "Bob was there too."
Not that I don't think Weir is a good writer, I enjoyed his work before he was even published. But I do think his being catapulted to the limelight after years of writing as a relative unknown says a lot about how much that level of prominence owes to luck of the draw.
Asimov tends to write way too many characters, most of whom the reader neither knows nor cares anything about, and this is definitely a failure mode. JK Rowling seems to strike a good balance, with lots of characters but each character having sufficient backstory and feeling like a real person. Andy Weir goes deep with only a few characters. I don't mind the last two.
The Martian and Project Hail Mary are really really good. I don't think it's the luck of getting selected for attention by the media. I think they got selected for attention because they were that good. Same with Rowling. But when Rowling does Serious Adult Literary Shit, it doesn't seem that great. The coincidence is that Potter put her in the right mindspace to do her best work.
Agreed, the golden age is still great to read because of nostalgia and the founder effect (where new ideas are still plentiful to grab and you get the originality bonus of being first there; and not only the bonus, the freshness coming from it is enjoyable). But I prefer 1990-2010 for a more cynical approach, more complexity, and much better literary quality (characterization, writing style, ...).
Golden age, I enjoyed Asimov, Herbert, Farmer, Pohl, van Vogt, Silverberg, ...
Later: Banks, Stephenson, Varley, Powers, Simmons, King, for example, and many others that do not immediately come to mind.
Lately, I do not read so much SF/fantasy... maybe because I'm getting older, maybe because I have less time due to family activities and so much access to video content, maybe because post-2010 authors suck (for me). It's not clear how much each factor matters, but given how much I enjoyed SF/fantasy before, it would be nice to check whether factor 3 could be corrected. Any modern authors to suggest?
Same thing for me. I loved Asimov and co. as a teenager, but when I tried to reread them later I did not enjoy them that much, as the writing quality is not that great.
Among recent SF novels, many people seemed to like The Three-Body Problem trilogy by Liu Cixin. I loved the Children of Time duology by A. Tchaikovsky and the Interdependency series by Scalzi (bonus: this one is quite funny!).
Thanks! The Three-Body Problem was on my to-read list; I didn't know about the others. In addition to the founder effect, there is also a deep resonance between the mood/obsessions of the time and SF/fantasy. Quite apparent when you look at the golden age (fast tech progress, post-WW2, cold war) and the post-golden age (hippies/drugs/sexual revolution, then early ecology and the end of the cold war). Maybe I have trouble with the current mood, which could be behind factor 3 (most current SF sucks, for me).
I like Becky Chambers (Wayfarers series) and Max Gladstone (Craft Sequence). Becky is SF (rockets, aliens) but only as a background. The stories are very character driven, and, realistically, not much happens that is critical. I think there is some similarity with the Firefly TV series (character driven, not really aiming to 'overthrow the galactic empire' or anything else) and Becky's Wayfarers series. If you like that sort of thing then I'd give it a try.
Max Gladstone's Craft Sequence is more fantasy. But neither epic fantasy à la Tolkien nor modern fantasy (e.g. Harry Potter or "So You Want to Be a Wizard") but something else. Which was refreshing for me, though maybe it isn't as different as I think -- I'm not really current with fantasy.
In any event, the basic premise behind the Craft sequence is that gods are real (think Zeus or Aztec gods rather than Y*wh) and about 100 years ago human academics figured out how to do what the gods did. And there was a war. And mostly but not entirely the gods lost and so now we have parts of Earth ruled by gods and parts of Earth ruled by the folks who overthrew them. And you can go to college and learn the techniques to be on the 'overthrew them' team. But magic works a lot like contract law, so you can easily wind up DEAD if things go poorly. Complications ensue ...
Ninefox Gambit was interesting and weird, but seemed very classic SF. The idea was explored. The characters ... not so much.
Thanks, I liked Firefly (the movie more than the series); I thought it was a very good loose live-action adaptation of Cowboy Bebop (all the more since I recently saw the actual live-action Cowboy Bebop - yuck), so it's certainly worth a try. Craft also seems interesting; it makes me think of American Gods, which I like a lot (both the book and the TV series), although it tends to get diluted a bit over time, "à la Lost"...
So yeah, it seems there are still interesting things lying around... maybe it's more factors 1 and 2 that are at play, plus having to find new prolific authors I can trust to reliably put out stuff I like (because quite a few of my favourites are, very unfortunately, dead :-( ), since I can't just start with the time-sorted classics like I did when I discovered SF.
Assassin's Creed Unity (french revolution era) has some great art and architecture which is all based on real French stuff. It's not original but it's better than the Sistine chapel.
I think most of the individual paintings on the Sistine chapel are not great, but there are hundreds of them. Lots and lots of in-game character models look better. I also very often see stuff on deviantart or shutterstock that looks better than one of the extras at the Sistine Chapel.
If you manage to set aside the historical/cultural baggage and make a purely aesthetics-based comparison, it's not hard at all to find videogame art far superior to the Sistine Chapel. Just recently I was in awe of some of the stuff in Doom Eternal. I'm also a big fan of Halo's Forerunner architecture.
But I think most people aren't really willing to do that separation. Videogames are low status. Michelangelo is ultra status. Comparison is an insult in their minds.
While the greats of history were aiming for aesthetics rather than the sort of qualities that modern artists aim for, I think that measuring one work or another as "superior" has to account for the tools that the creator(s) had to work with.
Michelangelo had to work on his own, on his back, with physical paints dripping in his face, nearly causing him to go blind. The paints themselves also fade over time, and probably don't have the same color balance now that they had over 400 years ago.
It's not like he did things the hard way for extra credit; Michelangelo didn't even want to be pegged to paint the Sistine Chapel in his own lifetime. If he had been born in modern times, and if he still decided to become an artist, he probably wouldn't have been a painter or a sculptor. He worked at the cutting edge of the media available to him at the time, and would probably have been happy to have toolsets with more expansive capacities which didn't get paint in his eyes.
On the other hand, the fact that it took years and nearly made him go blind probably has a lot to do with why the Sistine Chapel is so high status. Video game modelers don't put that kind of sacrifice into their work.
I agree with what you said(*), but I don’t think that Razorback was arguing that “Michelangelo is ultra status” is wrong, per se. I think he’s just pointing out that just because it’s ultra status doesn’t mean that relatively low-status stuff like videogames can’t contain better works of art, just as art. Yes, it’s “cheaper” art in a lot of ways (literally in the sense that many costs are less), but it can be better, and people really do sometimes confuse the two and even feel insulted.
I mean, Stonehenge is pretty fucking impressive, given who made it and how, and I really appreciate it, but it’d be silly to argue that you can’t find better works of architecture in low-status things today, including in stuff like Minecraft, or even public toilets or barns or something.
(*: well, I didn’t know the thing about nearly going blind, so “believe” rather than “agree” there.)
Old scifi is bad compared to new scifi, but old fantasy (Tolkien) is much better than new fantasy.
I'm sure I'm being mega selective, but as a simplified model, it's interesting that the genre that's supposed to take place in the future keeps getting better, while the stuff that's happening in the past apparently peaked in the past, and keeps getting worse.
How about "The Classic Rock" effect. Because that's why Classic Rock station playlists are always 90% the same as they were 25 years ago. The old art we remember is, by definition, a "best of" soundtrack.
I don't think that's it, because there is a common sentiment that the 2013-2016 range contains not just Scott's best post but *all* his best posts, a trend too strong to be pure coincidence.
Not a bad post by any means, but if you're into Scott more for his earnest insight than for his research the only section that really shines is the one about Martians.
The neoreactionaries were interesting when they were talking about bizarre game-theoretic arguments for monarchism. But now they seem to have been seamlessly absorbed into the mainstream right, saying the same kind of vaguely racist stuff that any racist uncle will give you, but with longer words.
The really interesting thing about Mencius Moldbug was that early on he was saying that the political right should just give up, because they can only ever be a phony opposition, thus forcing the left to act responsibly. Then he changed gears and started talking about a "True Election" that could bring a President Palin to power. Less interesting, more conventional.
I think Moldbug and most of his neoreactionary buddies realized they could be ideologically-pure (and thus doomed to so much street-corner ranting) or they could be effective (compromising with the mainstream could give them a chance to try and steer the mainstream in a direction they like).
Is he really that effective? He's on a Substack now, and he has been cited by Greenwald, but I don't know that he can claim to have affected any policies (as Scott is saying with COVID above).
Well, I think it's fair to say that the Republican Party of 2016+ is a lot more Moldbuggian than the Republican Party of Mitt Romney or John McCain.
It's impossible to say how much credit (or blame, depending on your point of view) Moldbug personally gets for this, but I think it's fair to say he's been an important voice in a movement that for better or worse has opened a lot of young right-wingers' eyes to the idea that there might be more to right-wing politics than just "repeal Roe" and "tax cuts for the rich".
There has been a turn against the neocons (which Moldbug himself once supported). But I think that's more because GWB left a legacy of failure people no longer wanted to be associated with.
I think that trying to assert that the right-wing mainstream moving further towards Moldbug's worldview and Moldbug tacking further towards the right-wing mainstream are wholly unrelated, or purely a matter of Moldbug "selling out", is a bit contrarian. Something doesn't have to be labeled "The Mencius Moldbug Bill for Installing the Monarch via Salami-Slicing" for it to be clearly influenced by the Dark Enlightenment he helped pioneer.
He's effectively monetized his bullshit, so there is that.
Dude should have just gotten cut a giant check for his Cathedral stuff and sent out into the internet to do more interesting shit, instead of having to pay bills.
After the election Moldbug was posting about how if the Republicans were *really* serious about winning an election, they would have had state legislatures overturn the results and install new electors while Trump used the military to seize power. Which definitely sounds "interesting" to me, but I suppose it's more conventional than "we should have a king who secures his power with cryptographically-locked guns."
Consider the midwit meme - neoreactionaries seem to reach the same conclusions as really stupid people, but for very sophisticated reasons.
My main problem with neoreactionaries is that they tend to be such habitual contrarians that they can't admit the mainstream is sometimes, in fact, right, or that a progressive position actually makes sense. A lot of Moldbug's posts seem like a purely intellectual exercise in Devil's advocacy.
"Better than the Beatles effect" is one name it's been given. In statistics terms, it's just a trivial order statistics fact: the probability of the next sample being the maximum out of the n samples so far is just going to be 1/(n+1).
I don't think you suck, or that you have gotten worse. I've been reading you since, I believe, "The Toxoplasma of Rage," and that has been awhile. You have your hobbyhorses (AI risk, prediction markets, predictive processing), but hey, who doesn't? Like you, I've thought about the big high-level stuff, and know those debates as deeply as I want to. Grand pronouncements aren't needed. I'm here for the insight porn--give me that on any topic, any level, and I'll be delighted to read it.
This is where I am as well, long-time reader, no sense you've gotten worse. Also very glad to hear about the community-oriented projects. But it's your writing around mental health stuff specifically that brought me and kept me here, and that's you writing from expertise grounded in practice (in addition to all your other good qualities), which is a somewhat different place you write from than the other pieces you write.
Same for me! I have been a great fan of SSC since 2017 and have read most of the older posts, and I am now a great fan of ACX. Yes, there is some evolution in the content - which is great, it would be kind of worrying if Scott did not change at all! - but it is still fascinating.
This is probably not a very central case, but: my now-husband introduced me to SSC in 2018. He and I used to read aloud SSC articles to each other while hanging out on a Sunday afternoon. We are definitely NOT grey tribe / rationalist folk, but sitting down to read one of your articles always felt very cozy, like just hanging out with a friend who was earnest, thoughtful, funny, and way smarter than us. I still get excited when I see a new ACX newsletter, but the impression I'm getting is that the content is growing more niche, and more and more I find both it and the community around it a little alienating. Still net positive for me though!
I could have written this same comment and consider myself solidly blue tribe (though perhaps with a greyer tinge than some of my peers, possibly in part due to the influence of this blog). For me the biggest decline in quality has been the comment section. I'm looking forward to an improvement now that there's a report function.
Scott, I sincerely admire your brilliance and your achievements. I'm only posting this in response to your direct question.
I learned from you, and now try to practice as a life principle: let what I say be truthful, necessary, and kind. On this basis, I was dismayed by your jokey headline, "My Ex is a Shit-eating Whore". It didn't seem very necessary or kind.
Thank you for clarifying, and I'm sorry for making the assumption. I guess I missed the attribution somewhere? I will delete my post, unless you feel there is some value?
Aella posted on Twitter that she thought it was an "incredibly sweet ad" (or "very sweet" or something; not sure about the exact adjective). If she doesn't mind, I don't think there's much point complaining. Nate may have even asked her before posting it.
The attribution is confusing; the first instance of 'I' in the document links to the author's Twitter. (I thought it was better-attributed but that was because I already knew the context. Illusion of transparency, whoops!)
Also since it doesn't look like anyone else mentioned it, Aella is a sex worker and received a fecal transplant: hence, 'shit-eating whore'. I imagine all involved find it to be a cute joke, though I can totally see how it might seem in poor taste from the outside.
Hi George, it sounds like you're directing that question to me..? Yes of course, I did read through a couple of times in sheer disbelief, and cringing all the way through, in my mistaken belief that Scott had written that "review" of Aella.
Thank goodness this imaginary drama had a happy ending, so to speak.. ;)
One other factor you didn't mention, but which I think contributed to my own perception of slowed insights: when first reading ACX, I got to consume your best-written and most insightful posts of the last 7-8 years in 3-4 weeks. Twice per week I got to read one of your top 10 posts. But now that I've caught up, those posts only come once a year, which definitely feels slower by comparison.
On this point specifically, I wondered if there was any appetite for reposting some of your 'greatest hits' (maybe even from the pre-SSC days) here on ACX, so the community can enjoy reacting to them in real time
For what it's worth, I haven't noticed a decline in quality; for example, the ivermectin article was probably as good as, and more important than, anything you wrote in the 2013-16 period you cite as being a high point.
I would greatly enjoy this - I found ACX in 2021 and giving some air time to the "Greatest Hits" dating back to pre-SSC would be wonderful. Please consider doing this, Scott, as you are so prolific I'm absolutely certain I've missed many gems.
In case you don't want to wait for Scott to do this, you can always browse some of the older compendiums of SSC +LW articles compiled by other people; my favorite is "The Library of Scott Alexandria", which attempts a categorization of sorts:
I would like that a lot. Especially if they were edited/updated, either throughout or with post-scripts, on how Scott's views have evolved since the article was first written.
Same for me. Falling down that particular rabbit hole ("The hammer and the dance" was my entry point) and seeing how much there was to discover, and how it resonated, was an intellectual rush. So, first crush, rose tinted glasses, lifelong hopeless romantic - or, if you prefer - the elusive chase for that feeling of the first high (not that I would know about that, though)
Interesting! I've been reading Scott's blog since it was a LiveJournal, so never got an intense rush of great posts - it's always been a slow drip. From my perspective, the quality hasn't dropped off at all; in fact, when I went back over his recent posts to pick my favourites for the ACX reader survey, I was amazed how many top-notch posts there had been since he started the new blog.
I wonder if many folks in your audience are at similar places on their developmental timelines as you, and are projecting changes (less excitement) they feel about themselves onto you.
A totally different theory, I wonder if the hiatus you took after the NYT brouhaha actually undid some optimizations, and you're finding your way back to the local minimum. (In other words, you were a bit rusty for a while.) But this doesn't match my impression of your writing.
Finally, you also have other things going on in your life. People famously get a bit more boring after they get married and have kids.
I'm blogging a lot less than I was when I started out (my first post was in September 2007), and I'm neither famous nor do I have interesting things going on in my life. On a related note, I went through a stretch where I wasn't reading nearly as many books as I used to, but now I'm trying to shift back (which is where the material for my blog posts comes from now that there's fewer other blogs to write about).
Remember the somewhat rambling post about the cliche where various colored pills gave superpowers and you fleshed out a world where certain people took those pills and that BRUTE STRENGTH won the day upon the heat death of the universe? That was such a fun thing to read, and the kind of thing I'd totally read more of even though it wasn't particularly intellectual. (I love the intellectual stuff too, but those fun posts are the kinds of things I think people are missing)
Googling “red pill” leads to FASCINATING OPINIONS. I liked how the story ends with galactic civilization being saved in the dumbest, most rule-breaking way possible.
Perhaps this is tangential to what you're writing here, but... I have to actually write this out at some point: while I've always had my issues with the rationalist community, when it was a smaller niche it was always rigorous. I could always expect real grappling with evidence and an acknowledgement of the complexity of the world. And while I can't say that any individual has changed for the worse (and am not accusing you of this), I think that as the community has grown it has become, for lack of a better term, a meme community. By that I mean that the larger rationalist community seems to me to be more and more defined by a collection of REFERENCES rather than a mode of thinking. So where once a reference to motte and bailey was taking advantage of a useful shorthand for beginning a conversation, one that recognized that there are limits to those kinds of metaphors for thinking, now the point is merely to say the term to indicate insider status. It's a devolution into magic-words philosophy, where people launder incuriosity through these terms and ideas. The holy texts cease to be invitations to complicated conversations and become instead places in which to hide, intellectually.
The thing is... I don't think there's any way that an intellectual tradition like rationalism can grow without that happening. It's an inevitable artifact of getting more popular. There's still tons of great and stimulating conversations happening under the banner. But part of my reservations about Julia Galef's book lies in this seemingly unavoidable consequence of broadening the appeal, the tendency to fall into "one weird trick" approaches to critical inquiry.
For the record I don't think your work is any worse than it has been in the time I've been consuming it. I do think the commenting community reflects the meme philosophy I'm talking about, sometimes, though I can't pretend to be a very rigorous reader of the comments.
My own feeling is that this stuff was already in a pretty advanced state of memeification by 2015-2016, when I first became aware of it. Maybe one way of looking at it is that different corners of a movement will have different balances of memes and real content, and you have to be alert to that when you are deciding where to hang out.
What's your 'sampling method' for measuring "the rationalist community"? I don't think the commenters here are a representative sample. (I don't think there _is_ an easy way to find a representative sample anymore. The (original) 'community' dispersed years ago.)
Personally, I say (half-jokingly) the rationalist community died when I said that the bailey was not (as Alexander said) the good land around the castle. It was a fortified courtyard. No one ever had fields in a bailey. It was part of a defense in depth strategy. The motte was a high tower that was purely a defensive structure. You'd then have a bailey (or better a series of baileys) around it connected by defensible ramps or bridges. So while it was a cool metaphor it was not all that accurate.
The person responded it was a metaphor and I was being pedantic with a tone that I'm very familiar with. That high school tone that says: You may be right. But you're uncool for being right. Your rightness makes you not one of us. I wasn't upset but I was disappointed in how utterly uninteresting and predictable the response was. There is now a rationalist clique and I was being told that if I don't get on side I'm going to have to go sit at a different table.
I don't blame rationalists for being like this exactly. But it isn't what I come here for.
Thanks for clarifying, because the conflict between the internet-rationalist definition of a metaphorical "bailey" and the actual castle "bailey" had always confused me.
Yeah. To draw it out completely: Scott wrote a piece repeating a work by Nicholas Shackel. Shackel said that the motte was the castle and the bailey was the valuable farmland around the castle. Old feudal lords would (according to him) have their peasants farm in the bailey. When raiders came by, everyone would go hide in the motte and rain arrows down on them until they left. Then they could get back to farming.
Motte and bailey is when you claim a huge but useful field that's epistemologically indefensible. If someone challenges you then you retreat to a smaller but more defensible claim. Then once the confrontation is over you go back to the wider but indefensible claim that's useful.
It's a useful enough concept. But that's not what a motte or a bailey is.
Side note: I'm often surprised by how little people know about castles considering their gigantic cultural imprint.
"A Motte and Bailey castle is a medieval system of defence in which a stone tower on a mound (the Motte) is surrounded by an area of land (the Bailey) which in turn is encompassed by some sort of a barrier such as a ditch."
Now Scott, referencing Shackel,
"The writers of the paper compare this to a form of medieval castle, where there would be a *field* of desirable and economically productive land called a bailey, and a big ugly tower in the middle called the motte. If you were a medieval lord, you would do most of your economic activity in the bailey and get rich."
Field here might mean farmland.
Now Wikipedia,
"The bailey would contain a wide number of buildings, including a hall, kitchens, a chapel, barracks, stores, stables, forges or workshops, and was the centre of the castle's economic activity."
Scott seems pretty close here, except maybe he intended the word field to imply farmland and that would be wrong? Although, see Carisbrooke Castle from the Wiki, where farming fields seem to be depicted as being inside the bailey.
Unfortunately the description of action (running away from raiders) firmly cements that in their minds the bailey was outside the wall. Otherwise they'd be implying the proper thing to do in an invasion was to run inside the walls then give up the outer layer of defenses and run further inside.
That said the metaphor isn't awful because, terms aside, it is an accurate description of the purpose of a castle (especially one with a large bailey) relative to the farmlands around it. It just gets the names wrong.
I agree that representing the bailey as farmland is not accurate (after all, walling in your farmland is known as "a border wall"), and low-intensity farming was obviously the vast majority of economic activity in mediaeval times. But I wouldn't say that the bailey is necessarily completely unproductive either; it *could* be in some castles, but AIUI frequently things like horticulture and markets were inside the bailey walls, and those are certainly of high economic value *per acre* (as opposed to total).
I think the error was introduced by Scott - if Wikipedia’s page is to be believed, the original formulation was by Nicholas Shackel, and his definition seems to get the definitions of motte and bailey mostly right.
I think the metaphor works just as well if you use the correct definition of bailey - either way, it’s a broader, more useful, but harder to hold area than the “insalubrious but defensible, perhaps impregnable, Motte”.
I think I knew what the bailey was from the beginning and it seemed to fit the metaphor, perhaps because I was thinking of it geometrically, the core and the area around it. If Scott at some point defined it as the good land I don't think I noticed.
The metaphor is directionally accurate. The motte is a purely defensive structure. It is surrounded by the bailey, which has a lot of actually useful and productive stuff and is less well defended than the motte. In the face of a sufficiently severe attack, you abandon the bailey and retreat to the motte until you've driven the attackers away.
Claiming that the bailey is farmland is technically incorrect. And if you're not e.g. using that technicality to derail a substantial debate over something else, you should get nerd credit for pointing out the technically correct version. Not some smug "go away troublesome outgrouper" response.
But, directionally accurate is about all you can expect from a metaphor, and once one is accepted into the language or jargon, meh, forget it, Jake, it's Chinatown. Well, a few miles north of a Chinatown.
I find it a little odd that you let one person who said that to you in that manner, kill the rationalist community in your mind. There's always assholes everywhere which I assume you are aware of so I'm guessing I don't understand your point?
I did say half-jokingly. The non-joking half is that such a statement doesn't happen in a vacuum. The fact the person even tried implies the existence of the cliquishness I'm put off by. And I've not seen anything to change my view subsequently that, to use the in group language, the ideology is not the movement.
“That high school tone that says: You may be right. But you're uncool for being right. Your rightness makes you not one of us.”
Not saying that phenomenon doesn’t exist, but I don’t think that’s what happened here. What happened here is that you were technically correct but missing the point. You were letting pedantry wreck a useful metaphor without providing an alternative with the same level of utility.
Nobody likes a pedant - not because being correct is uncool, but because having someone constantly show off how smart they are by “well actually-ing” trivial points makes it harder to have productive/interesting conversations.
I think this depends a bit on Erusian's intention: if they simply wanted to convey a neat history fact and then got slapped down, I could see that being an unpleasant experience. If their point was that the metaphor was bad because of this detail, then that is literally pedantic. Even assuming the former, though, I think it is possible that the person who replied to them assumed the latter.
So, this interaction doesn't move me much in either direction.
See, this is a cliquish attitude. The underlying assumptions here are:
1.) The tribal totems of the group (such as a specific metaphor) are more important than literal correctness. You're also inventing a scenario which makes me wrong. That is a somewhat defensive reaction and seems soldier-y.
2.) I deserve to be socially punished (someone "nobody likes") for challenging a commonly held but false belief among the group. (In this case, the definition of bailey.) If I am going to correct it I must leap through certain hoops to make the criticism acceptable, effectively requiring a loyalty test. ("providing an alternative with the same level of utility.")
3.) If I want to be part of the group I need to accept group policing on what is trivial and what is central. You're telling me that using a word in a wrong way (which confused at least three people) is unimportant. I don't think so. But you, as a member of the community, have decided against me.
These are all important things for maintaining group cohesion! But they're not rational. They make you more wrong, not less wrong, in the interest of collective wrongness creating group cohesion. That's literally true here: you're using a word wrong.
I agree that this is a cliquish attitude, but I'll disagree with your analysis of the use of 'motte and bailey'. Since it has become a standardized term of art (or a cliché, if you prefer) its meaning has shifted, and its etymology has become irrelevant. That's what happens in language. You bolster your arguments even without the use of a pillow, and you police language even without the use of a badge and a gun. Once something is metaphorical, it cuts itself loose from its original meaning and wanders off on its own. We may not like it, but we can't change it, and complaining about it will have no effect.
End of sermon. I agree that there's too much grumpiness on this blog at the moment, by the way, and I'm trying not to be grumpy about this.
I think the issue here is twofold: firstly, people who are familiar with the normal definition of the word will be confused by the new definition. Secondly, it will misinform rationalists as to what the word means. Basically, it cuts the two linguistic communities off from each other, making them unable to communicate unless the full context is understood. But I have no objection to words changing per se.
Etymology may not matter for words, but I don't think etymology ever becomes irrelevant for a phrase that's a metaphor. For instance, think of the phrase "toe the line" -- it means to follow the rule precisely. The phrase means to metaphorically do the equivalent of what athletes do at the start of a race, when they bring their feet so close to the starting line that their toes touch it, but don't go over: they toe the line. Now some people write the phrase as "tow the line." Everybody still knows the phrase means follow the rule precisely, but the metaphor is lost, because the phrase is mostly senseless, and what sense it has has nothing to do with precision and rules. You could just as well declare that "blubbering turd" means to follow the rules precisely.
I want to first say that 1) I am not part of the rationalist community, and in fact probably someone who's about as far away from the rationalist community as possible in geography, mindset, and social class, and 2) I really want to phrase this in a way that's less confrontational but am having an extremely hard time doing that right now. With that said, this seems like a fully-general argument against engagement with any culture or social group or any language that does not have perfect and immutable sign-signifier linkage built into it (such that metaphor, analogy, terms of art, etc. are considered to be as severe a violation of grammar as me said this sentence). I'm not quite sure what a "non-cliquish" or "perfectly rational" interaction by your standards would look like. I'm not sure it would even be possible, even in Raikoth.
Not really. It's extremely simple. You have to define your terms. It's like the famous example: π < 2. Now, you can read this one of two ways. Obviously wrong if you take π to mean (as it commonly does) 3.14 etc. Or you can, in context of the equation, realize that π is being used as a variable that's about 1.3. And then someone can say, "You know, it's a little weird using π to mean anything other than 3.14." And the other person shrugs and says they like to use it. And everyone understands what's going on.
Now, if the person said, "No, it's the same π that is used to calculate circles and it's less than two." then they'd be incorrect. (Or have some mathematics above my pay grade to back it up.)
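To make that "define your terms" point concrete, here is a toy sketch in Python; the names and values are purely illustrative. Reusing a conventional symbol is perfectly legal, it just has to be declared so readers know which definition is in play.

```python
import math

pi = 1.3  # locally redefining a conventional name; fine if declared up front

print(pi < 2)       # True under the local definition
print(math.pi < 2)  # False for the pi used to calculate circles
```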
Yes, but if people need to engage in heavy circumlocution about every single word, you're going to set the barrier to entry for communication so high that people (at least, the kind of people who exist here and now, as opposed to some New Rationalist Human) just aren't going to engage in it, at least not with the people who insist on setting that barrier up. The important thing about communication is that the root idea is conveyed, not that every word perfectly aligns with what is written in a dictionary. That isn't a "cliquish definition", that's just how language actually works; cavemen didn't have dictionaries to point at when they needed to communicate the idea "A lion is coming, we need to leave or it will eat us." As others have pointed out, "motte-and-bailey" conveys a complex idea in three words, and does so regardless of whether bailey means exactly what the man who coined the term thinks it means; in fact it still works as an analogy even with the meaning of bailey you're insisting be acknowledged. The idea that the whole phrase is poisoned because of one minor factual error is, essentially, throwing the entire nature of language out with the bathwater.
1) You’re asserting your own totem here: literal exact correctness is the most important thing, and anyone who disagrees with you is “cliquey”. That’s a very weird attitude to have about a metaphor, of all things, and frankly pretty exclusionary on your own part.
2) I’m not saying you deserve to be punished, I’m saying you don’t deserve the praise you’re demanding for pointing out a correct, interesting, but not really relevant factoid. You accuse me of being “soldiery” but honestly you seem to be the confrontational one here. You seem to demand some sort of prize for superior intellect, and having failed to receive that, decided “you all are cliquey losers, I’m taking my ball and going home”.
As for hoops to jump through - no one is saying “you can’t be a rationalist if you don’t believe this incorrect definition of ‘bailey’” they are saying “motte-and-bailey is a useful metaphor for thinking about a common rhetorical strategy, this is true whether or not it gets ‘bailey’ exactly right, so we’re going to keep using it”.
3) Nobody is “policing” you out of the group. Nobody even really has that authority, least of all me. You’ve voluntarily decided to leave, because you can’t stand the fact that the name of a metaphor representing a rhetorical concept sort of relies on an incorrect definition of one of the words in the name of the metaphor.
I’ll add to that that even using the correct definition of “bailey” doesn’t change the metaphor - either way, the “bailey” is more generally useful, the “motte” is more defensible. That’s the only bit that’s relevant to the metaphor, the rest is just color.
So yes, I stand by my assertion that the distinction is trivial (for the purpose of discussing rhetoric. It clearly is not trivial for studying medieval settlement construction).
A key point from Erusian was that "you're also inventing a scenario which makes me wrong" and I'm inclined to agree. You did not witness the exchange; you are choosing to interpret Erusian's description of the exchange in a way that makes him look bad.
As an aspiring rationalist, I'm inclined to think you're hurting the reputation of rationalism with the approach you're taking. (even if you don't identify as rationalist.)
The bailey was kind of a fortified patio, but it was also a prison for serfs kept safe from bandits by the lord of the motte so he could profit from oppressing them. Um. I think of the metaphor as a polite way of saying 'bait and switch' without calling the other person a liar. And I think it works pretty well. It makes arguments more reasonable without harming any actual serfs or their lord's economic interests.
I don't think Scott's quality has changed much, but the comments section used to be a lot more right-left confrontational. If that comes back the place will probably be purged.
Scott gives his (or Shackel's) definition of bailey. It's "a field of desirable and economically productive land [around the motte] called a bailey." You go into the motte to avoid attacks and then go back to farming in the bailey when the raiders are gone. So it's clearly a mixup.
I'm just going to say it. Scott would make a bad feudal lord. Any lords looking for advice on how to run their fiefs should read someone else.
For years I thought that a "motte" was an olde Englische word for a "moat", and a bailey was either where the bailiff lived, or another old word for a "jail" or a dungeon. A bailiff is a kind of jailer or dungeon-keeper, right?
So when your warriors could no longer defend the water-filled "motte", they retreated to the strong stone "bailey" for a last-ditch defense. Which was precisely backwards.
Motte and moat actually were the same word. Motte literally means "hill" or "mound" but was extended to mean the earthworks (both hills and trenches) around a fortification. Eventually hills became less common and ditches filled with water became more common and so the motte came to refer mostly to a ditch filled with water. Now we say that a motte is the hill and the moat is the wet fosse. But they were the same word.
A bailiff is the person in charge of a bailiwick. A bailiwick is literally "a handed over/carried house." A bailiff is literally a carrier or bearer. Implicitly of something handed over. A porter in some cases. Wick means a home/house/village and therefore metaphorically an estate. So a bailiwick is a handed over estate. And a bailiff is the person in charge of it. But not the owner. This role as the person in charge but not the actual owner later created a royal class of officials as the king delegated various duties to the people he appointed bailiffs.
It's a false friend with bailey which probably derives from the Latin vallum meaning a type of wall (related to "poles, stakes" as in a palisade). A bailey is literally a walled area. Though some people think it might be related to the other word. However, bailiff is not the term for someone in charge of a bailey. It seems like in most of Europe the person in charge of a bailey was in charge of the gate that let people in and took something like that for their title. (Though, confusingly, sometimes bailiffs were in charge of what were called baileys! "Handed over things." And it wasn't uncommon for one person to hold two positions.)
OK, I know something about Marxism, and I think Cassander is right, but also your post does not in any way contradict this: "I don't think there's any way that an intellectual tradition like rationalism can grow without that happening. It's an inevitable artifact of getting more popular." The pot isn't calling the kettle black; the pot is pointing out that this problem is unavoidable....
Cassander is a rather prolific commenter in the rationalist community. My strongest association with Cassander is probably 'pro-capitalism' so I wouldn't be surprised if Freddie has read plenty of posts by Cassander about Marxism in the past.
This is such a pitch perfect response. Not even an attempt at an argument, just immediately lashing out with one of the oldest and laziest of ad hominem cliches. You've proven my point better than anything I could ever say.
And *your* attempt at an argument-that-was-definitely-not-an-ad-hominem-cliche was...what, exactly? What further response is called for when your comment was basically just "Says you, commie"?
He didn't "make a reasonable point," he just said "you're a communist, therefore a hypocrite, therefore I have verbally owned you by pointing these two things out." It's an anti-point.
Pointing out that communism is known for the same lack of curiosity that FdB is accusing rationalism of is interesting to me. I would have enjoyed a substantial response.
If Cassander made a point about the intellectual habits of communists, that would be one thing, but he didn't. He *implied* the existence of such a point as scaffold for a textbook ad hominem tu quoque ("you do X therefore your criticism of X is null").
Not saying Freddie's response was *ideal*, but Cassander's initial reply was definitely "coming in swinging".
Is it known for that, though, or just "known" for that? Despite being a bit left of the average reader of this blog (but definitely right of Freddie!), I'm actually not a fan of the reasoning style of most of the communists I've encountered. And whilst I've not read Marx, my prior that he is right on most of whatever claims divide him from the average academic economist right now is low. But just stating as a fact that your ideological enemies are especially group-thinky is both question-begging and not really very useful.
This isn't a reasonable point, or even a point at all. Just look at their next reaction, a classic "not even an attempt at an argument...", when he started things off with a lazy "har gar communist is magic". A pot and a kettle indeed.
I think that Eliezer intentionally baked memification into his Sequences from the beginning. It's part of why I never actually managed to read them - I just couldn't stand the excessive appeal to pop-culture versions of martial arts and of Zen.
This isn't to say that you're wrong, but, well, the snake was in the garden from the beginning, you know?
Well there are only so many good ideas out there
Hmm I think someone needs to reread the Fun Theory Sequence ;)
Yeah, but "Scott does data journalism about a topic he just became interested in" is probably an endless goldmine. More broadly, I subscribe because whenever he learns about a new topic he generally finds interesting things to say about it.
Agree. It's a pity that the more important something is, the more likely Scott already heard about it a long time ago and got bored of it. I make a conscious effort to minimize my news consumption so that recent doesn't displace important, but then maybe I look like an idiot for not knowing about the latest tempest in a teacup. I figure if anything really important happens I'll probably hear about it from friends. But then also maybe I stay unaware of how awful the reporting is on the Newspaper of Record and lose opportunities to dunk on it and and motivate improvement in media. The NYT and WaPo are so horrible that I completely stopped listening to them ages ago.
As an urban planner, it was fun for me to read his take on why we stopped designing buildings people like. I've been researching that subject for years and he hit on stuff I had never thought of. His mind is quite the floodlight.
Yeah, I think I remember Bill Simmons saying something like that about why he gave up writing columns a few years ago: he only had so many original ideas in his head, and after 15 years or so as a regular columnist, he'd gotten them all out there.
I don't think Scott even made a dent in the amount of potential good ideas.
The links on "I continue to post some vaguely anti-woke stuff (1, 2, 3)," seem to be partly incorrect: both 2 & 3 go to https://astralcodexten.substack.com/p/too-good-to-check-a-play-in-three .
Yeah I noticed that, then tried to figure out if it was some subtle joke about that post (if it is I don't really get it).
I may be young at 31, but I'd still greatly, greatly enjoy you writing new blog posts about religion, abortion, etc
Seconded.
I think that's his point. These things matter when you're younger and those ideas seem fresh. He's no longer there
There needs to be an index of Scott's posts, definitely covering the Substack and SSC, and maybe his longer posts on LessWrong and Livejournal. Then if you're interested in a topic he doesn't want to write about any more, you can look up what he said in the index and thoughts that are old to him, but new to you.
I'd consult that
It seems like it's always going to be an ask to keep people quite as engaged and excited as they were at the beginning and I would expect almost all of the feeling has to result from 'point 1' sort of considerations. I felt bad reading that reddit thread and knowing you'd see it! I still think your thoughts are worth more than the price of a subscription and hope you know a lot of people still really enjoy your posts :'(
You are, in fact, using simulated annealing wrong. It's the complement of what you describe, which is a classic technique to find a local optimum. SA is adding noise to your small steps, so that you do not get stuck in a local optimum but have a chance to find a better one (even the global optimum), using the random jumps to luckily hop over local barriers and fall into neighboring (hopefully better) optima ;-)
So maybe you should use SA if you feel trapped in a comfortable routine but suspect you could maybe do better: just do crazy things from time to time :-p
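For readers who want the above in concrete terms, here is a minimal sketch of simulated annealing in the sense described in that comment. The objective function, cooling schedule, and constants are toy choices for illustration, not a reference implementation.

```python
import math
import random

def objective(x):
    # Toy 1-D landscape with several local minima; purely illustrative.
    return x ** 2 + 10 * math.sin(3 * x)

def simulated_annealing(x0, steps=10_000, t_start=5.0, t_end=1e-3):
    x, best = x0, x0
    for i in range(steps):
        # Temperature cools geometrically from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (i / steps)
        # Propose a noisy step; the noise shrinks as the temperature drops.
        candidate = x + random.gauss(0, 1.0) * t
        delta = objective(candidate) - objective(x)
        # Always accept improvements; accept worse moves with probability exp(-delta/t),
        # which is what lets the search hop out of local minima early on.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if objective(x) < objective(best):
            best = x
    return best

print(simulated_annealing(x0=8.0))
```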
I guess the idea is that he started out with a lot of noise, and now he's settled into the end stages of the annealing.
Could be... but I guess my advice holds: even if changing routine feels risky and really sucks most of the time, do it anyway, even if rarely. Turn up the heat before letting it cool back down again into a (hopefully better) routine :-)
What would the right term for the thing I'm talking about be?
Gradient descent?
ETA: Taking the metaphor more comprehensively, it's either a deliberate part of simulated annealing where the noise is specified to decrease over time, or just the natural decrease in step size from any iterative optimization strategy.
Isn't that just moving in the opposite direction of the gradient of the error function, without taking the size of the jumps into account?
The gradient has both magnitude and direction, so the size of the jumps is proportional to the steepness of the error function. For a smooth function the slope gets shallower as you approach a local extremum (unlike at a sharp corner), and thus the jumps naturally decrease in size as you get closer, without needing to explicitly build that in (though sometimes it's helpful to do that anyway).
Ah that makes sense. Still something that I'm unclear on: aren't the size of the jumps strictly controlled by the algorithm in gradient descent, and not just naively proportional to the length of the "gradient vector"?
Learning rate decay: start with a large learning rate to quickly converge 'near' good solutions, then decrease the learning rate to settle into an optimum.
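A toy sketch of the two effects being discussed in this exchange: the step is proportional to the gradient (so it shrinks as the slope flattens), and an explicit learning-rate decay can be layered on top. The function and constants are invented for illustration only.

```python
def grad_descent_with_decay(grad, x0, lr0=0.3, decay=0.01, steps=200):
    """Plain gradient descent where each step is proportional to the gradient,
    with an explicit learning-rate decay layered on top."""
    x = x0
    for i in range(steps):
        lr = lr0 / (1 + decay * i)   # learning rate shrinks over time
        x = x - lr * grad(x)         # step size also scales with |grad(x)|
    return x

# Toy example: f(x) = (x - 3)^2, so grad f = 2(x - 3); the iterates settle near 3.
print(grad_descent_with_decay(lambda x: 2 * (x - 3), x0=10.0))
```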
I think your usage is fine. Greg Kai is talking about how machine learning engineers use simulated annealing to ensure that the neural network is not stuck in local minima by programming in large jumps in the solution space. You are describing how you were naturally using simulated annealing (because you were anyway bouncing around a lot, and didn't need to deliberately program it in), and then eventually started bouncing around much less.
Trapped in a local optimum?
No, I think we're talking about a different effect here. Whether the optimum that Scott is heading for is global or local, he's taking smaller and smaller steps as he gets closer to it.
In a totally different metaphor, you can just say "plucked all the low-hanging fruit already".
I disagree somewhat with Greg here; I think your use of the term is fine. It might be slightly inaccurate, I suppose, but the point is the cooling-down effect you're pointing to, which simulated annealing also has. So depending on what you're emphasizing, simulated annealing may be the best metaphor.
There are also (naturally) different forms of simulated annealing, but all of them lower the noise over time. So in the beginning it encourages big jumps and at the end it discourages big jumps. But the point of simulated annealing is to get closer to the global optimum, even if we're sacrificing some ability to approach the optima that are nearby (the local optima). So perhaps Greg was primarily responding to "you end up at some local optimum".
If you want another machine learning metaphor, you could use the idea of a gradient descent learning rate optimizer. Optimizers explicitly use the gradient to inform the step size of the gradient descent algorithm, so that when there is a steep gradient the step size is high, and when there is a gentle slope the step size is low. This explicitly looks for the local optimum (which is less of a problem in higher dimensions). See this nice visualization from Sebastian Ruder: https://ruder.io/content/images/2016/09/saddle_point_evaluation_optimizers.gif
Of the two I'd say simulated annealing fits better.
> So perhaps Greg was primarily responding to "you end up at some local optimum".
Actually, I'd say the usage is fully correct. SA does not guarantee a global optimum - just a better chance of getting at it than naive gradient descent, and if not, a very good chance that you'll end up in a close-to-the-global optimum. Pretty much the same way that having exotic experiences and trying out many viewpoints while you're young gives you a better chance at having a more complete view of the world than if you just iterate on what your parents told you.
Agreed, although I do still think that that particular quote is somewhat incorrect, as you don't end up at a local optimum in SA; as you said, the point is to get close to the global one. But really that's splitting hairs, and it's a good metaphor imho.
Just to add another voice, I think you used it entirely correctly and I'm a mathematician who uses similar optimization algorithms professionally (global optimization by intermittent diffusion most often).
I use the same exact example you used (figuring out your identity) when I teach SA, so I think your use was fine.
I disagree with Greg - simulated annealing is a fine analogy for the exploration process you described
Well, that's just iterative optimisation: almost all optimisation algorithms take smaller and smaller steps as they converge to an optimum. That's what you describe, and it's indeed also part of the human way of improving at something over time: as you get better, the further improvements tend to become smaller/slower.
SA is the opposite: it deliberately adds noise to the steps of the iterative optimisation in order to decrease the chances of being trapped in a local optimum. The amplitude of the noise is adapted, depending on where we are (number and/or size of iterations), and indeed goes to zero at the end (else you would never converge)...
But it's really an update to what Scott describes (a classic optimisation, done by a human but similar to most algorithms in the sense that the steps get smaller as you converge to your optimum): SA would be for Scott to deliberately change course in non-optimal, random directions... mostly when he starts to converge (the early stages are big and semi-random anyway; the late stage is the final convergence), but maybe also when he has clearly converged (does not see how he could improve) yet is somehow dissatisfied with the result.
That would be SA in a context of self-improvement (and it's indeed something people do, in sports training for example, when trapped in bad habits, which are in fact local optima).
I actually think you're mischaracterizing it when you say SA does "the opposite" of what other iterative optimization algorithms do. SA is itself of course an iterative optimization, but when we contrast it with a naive hill climber, the point is that SA can jump from one slope to another, which it does less and less as time goes on.
Yes, it does this in terms of noise, but this doesn't mean it does "the opposite" of a naive hill climber, in fact I'm unsure what that would mean exactly.
> SA would be for Scott to deliberately change his way in non - optimal, random, directions... Mostly when he start to converge
You made a similar point before; you seem to think that SA makes the temperature go up midway through the search, but this is not correct: the temperature always goes down.
But all of this is more about SA technical specifics, while Scott's point is aptly described by SA. In fact, taking your description here, "to deliberately add noise to the steps of the iterative optimisation (...) indeed [the noise] goes to zero at the end": this perfectly fits what the metaphor is intended to say, as far as I interpreted it.
I mean that local optimizers (basically all of them, barring some initial landscape sampling and interpolation) take smaller and smaller steps in the optimal direction (the gradient, when you have it or a reasonable approximation of it, or the best among a few tested choices).
SA will add noise to those steps, making them non-optimal or even not the best among the possible choices (i.e. the opposite of classic optimisation: it's deliberately choosing to do something different even if it is worse). And you select the noise amplitude so that it does this in a way that increases the chance of jumping to a better optimum, without slowing down total convergence time too much (that's all the subtlety of annealing, simulated or not: you have to heat up at the right time and cool down at the right rate to get what you want).
SA (to me, but I work in the field, not neural networks but classic engineering optimisation) is not an optimization algorithm per se; it's an ingredient you add to a deterministic optimisation algorithm, one that will slow it down but decrease the chance of being trapped in a (not so great) local optimum. Basically you add it to NR/steepest descent or variants of them.
In genetic algorithms, the SA equivalent would be the mutation rate, while what Scott describes would be the fact that initially the less-fit variants being removed will really be much less fit, while later on most of the population will have similar fitness. Mutation is an orthogonal ingredient, needed to ensure that you explore more of the design landscape (and decrease the chance of converging to a poor local optimum), but it slows down convergence to a single design....
SA and mutation are really counter-intuitive in the sense that, in order to increase your chance of converging to the global optimum (or a good local one), you need to decrease your chance of being trapped in a bad local optimum, and a way to do that is to avoid systematically changing in the ways that improve your results the most. That's the only way to cross the local barriers that trap you in bad local optima. But you can't do that all the time or too much, else you end up not optimising anything and just drifting...
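To illustrate the genetic-algorithm analogy in the comment above, here is a toy sketch where an explicit mutation rate plays the exploratory role described. The "one-max" fitness function and all parameters are invented for the example, not anyone's actual setup.

```python
import random

def fitness(bits):
    # Toy fitness: count of 1s (the "one-max" problem).
    return sum(bits)

def evolve(pop_size=30, genome_len=20, mutation_rate=0.05, generations=100):
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half (greedy pressure toward the nearest optimum).
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            # Mutation: the orthogonal ingredient that keeps exploring the landscape.
            child = [1 - g if random.random() < mutation_rate else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(fitness(evolve()))
```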
I agree with everything here, especially your last paragraph. And now I see what you mean by doing the opposite. You don't mean the entire algorithm is doing the opposite, you mean it does at some steps accept a non-optimal jump, opposite of what optimization algorithms normally do. That makes sense.
And reading over Scott's description again:
> the thing where if you’re doing an optimization problem, you start by making big jumps to explore the macro-landscape of the solution space, then as time goes make smaller and smaller jumps
That really only captures iterative optimization as you said, not SA. Nevertheless I'd still say that the idea of SA as you describe it here really captures the point Scott was going for.
Indeed... I was surprised that so many people disagreed somewhat with me while also showing they knew what they were talking about. Looking at the English and French versions of the Wikipedia page, I start to understand why (I am French-speaking): the pages are certainly not incompatible, but there is a subtle language difference. In English, SA seems to apply to the whole optimisation process. In French, "recuit simulé" applies more specifically to the modification of a classic deterministic algorithm, as a parametrised added noise (T is the parameter).
It's subtle, but it explains why I got the simultaneous impression of disagreeing about SA while agreeing about the description of the optimisation.
I still think the French terminology is more useful, as it emphasizes what is specific to SA compared to purely deterministic optimisation approaches, and is closer to metallurgical annealing (a way to get out of the local optimum just after quenching by adding thermal noise, but controlled thermal noise, so you keep the benefit of quenching and don't go back to the completely non-optimal original state).
But hey, I'm French(-speaking), so it would be hard for me not to disagree with the English terminology ;-p
I just wanted to point out that SA may stand for both Simulated Annealing and Scott Alexander.
So Scott is saying that his bouncing around between ideas was a demonstration of simulated annealing. He was jumping about a lot at first, and found the neighborhood of his sweet spot, and then started jumping around much less.
What you are saying is that simulated annealing is the process of pre-programming larger jumps early on in the process of finding the global minimum. In some sense, humans, like Scott, are already pre-programmed to do that. Neural networks are not. That is why simulated annealing has to be programmed in separately.
I think Scott is using the term 100% correctly. Annealing literally means "cooling down", and it comes from the idea that hot particles jump around in the energy landscape, while cold ones are stuck in their local optimum. Yes, SA uses noise (controlled by the parameter T, which stands for temperature). But the name-giving feature of the SA algorithm is that you lower the temperature over time, which is exactly as Scott uses the term.
+
No I think he’s mostly right: in simulated annealing you start with a high temperature (in your teens I guess) that corresponds to a large probability of going uphill to a higher energy state, and then as the temperature drops over time/Scott enters his 30s (this is the annealing part) you converge to always taking downhill steps that essentially behave like gradient descent. Source: I wrote a paper once that used simulated annealing as a gradient-free method.
The intuition here is annealing metal: if you quench hot metal by sticking it in cold water then all the individual molecules drop into whatever low energy state is closest, so you end up with a very chaotic crystal structure. If you instead let it cool slowly then it has lots of time in the high energy state where big non-local changes are possible and so it can converge to a more uniform low-energy crystal structure as it cools.
Exactly. In fact, AFAIK (but metallurgy is not my speciality), it's not only cooling it down slowly; it's also re-heating it and keeping it at a moderate temperature for a time, then cooling it back down to ambient at the correct rate.
It's clearer in French, where annealing translates to recuit, meaning to cook again, or re-heat. The full heat treatment would be: heat to T1, quench, heat to T2<T1, then cool it down slowly (the time at T2 and the subsequent cooling rate are important).
Heating to T1 followed by a more or less slow cool-down is not (AFAIK) annealing, and convergence to an optimum (which is another way of saying your update steps get smaller and smaller) is not simulated annealing in an optimisation context.
Your second paragraph describes hardening (T1+quench) and tempering (T2). Annealing is in fact the process of holding metal above a critical temperature (T1), then cooling slowly enough to preserve the maximally unhardened condition, for ease of machining (source: am a machinist).
I don't think that's correct, or at least is off target. The reason it's called "simulated annealing" is that you are mimicking the process of real annealing (in metallurgy) by gradually reducing the temperature, id est you gradually turn down the amplitude of your noise as you settle into the valley that (hopefully) leads to your global optimum.
It would appear you are critiquing Scott's use of "taking smaller steps" because, as you (correctly) point out, often the individual Monte Carlo step size doesn't change, and doesn't need to. But what he's getting at is that the net excursions (over however many steps you want to average) become smaller and smaller as you turn down the temperature, and that is both the feature he is describing in the evolution of his ideas and also the most obvious feature of simulated annealing simulations.
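A toy sketch of that last point, assuming a simple Metropolis rule on a bowl-shaped energy: the proposal step size stays fixed, yet the walker's typical excursion from its starting point shrinks as the temperature is turned down. Everything here is illustrative.

```python
import math
import random

def net_excursion(temperature, steps=2000, step_size=0.5):
    """Metropolis walk on a quadratic energy with a FIXED proposal step.
    Returns how far the walker typically wanders from its start."""
    x, start, total = 0.0, 0.0, 0.0
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        delta = candidate ** 2 - x ** 2  # energy change on a bowl-shaped landscape
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
        total += abs(x - start)
    return total / steps

for t in (5.0, 1.0, 0.2, 0.05):
    print(f"T={t}: average excursion ~ {net_excursion(t):.2f}")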
typically in simulated annealing, the noise decreases in volume as you approach the optimum, so what he said fits
Since you live in a rationalist group-home, I'm skeptical of your claim that not interacting much means you won't be affected much.
Maybe there's an actual name for the phenomenon I'm about to describe. I call it "new vs best".
When you write a new post, people tend to compare it to the best work you've ever done (Moloch, or whatever). Statistically, the new post is almost always going to be worse. So it looks like you've fallen off in quality.
But that's an unfair comparison. A fairer one would be to put one of your newer posts against a random SSC post from 2015 - if you do that, I'm confident that your newer writing holds up, and has maybe gotten better.
Another factor is that (in my opinion) things were actually more interesting in 2015. Take neoreaction. Whether you agreed with it or not, that was a fascinating thing that was fun to talk about. It's hard to find an analog for it in 2022.
We live in a media landscape where 80% of the air is sucked up by COVID and vaccines and Trump. It's actually boring as hell and I can't wait for it all to end.
I've been calling that effect "Time Distillation", particularly in the context of comparing popular media quality now to that of past eras. Namely, a lot of popular media is mediocre-to-poor and always has been. But it often feels like the new stuff is much much worse than older stuff because the dreck of past eras has been disproportionately discarded and forgotten, as the passage of time has boiled off the impurities and concentrated the timeless classics in our awareness, while the contemporary dreck hasn't yet been distilled away.
Time distillation, that's an excellent and succinct way to name that phenomenon.
yeah that's a good one
The "golden age" of sci-fi -- Asimov, Bradbury, and Clarke -- all seem mediocre in comparison with the state of sci-fi in 2022. I can't imagine how bad the rest of the sci-fi must have been back then. But Orwell holds up. Michelangelo seems mediocre in comparison to corporate drones who build game levels today, so the masses of artists in his day must have been really atrocious.
The golden age of sci fi seems amazing compared to the dumpster fire of 2022-era sci fi.
Mind you, relatively recent (1990-2010) stuff is still good.
Andy Weir and Neal Stephenson are making good stuff. Better than Asimov's robot/empire series at least. I'm just starting Foundation.
I suspect being better than one's contemporaries can start a happy death spiral that leads to seeming better than one actually is.
"Andy Weir and Neal Stephenson are making good stuff. Better than Asimov's robot/empire series at least. I'm just starting Foundation."
The Robot/Empire series is/was sorta retro-fitted. For early Asimov I'd go with:
*) Foundation, Foundation and Empire, Second Foundation (then stop)
*) I, Robot (short stories)
*) Caves of Steel
*) End of Eternity
Asimov is good at ideas. Not so good at writing actual humans.
Also, the "big three" of the golden age of SF are usually: Asimov, Clarke, Heinlein.
Bradbury is good, but only sporadically wrote Science Fiction.
The best Stephenson is better than Asimov ... Stephenson is better at actually crafting sentences and paragraphs and characters.
I'm reading The Dispossessed now, having read The Martian Chronicles (and a greater amount of Heinlein) and seen 2001 but not having read Asimov or Clarke. How does Le Guin stack up compared to them?
"Asimov is good at ideas. Not so good at writing actual humans."
Hm, Liu Cixin and the Three Body Problem (actually the whole trilogy) seem relevant. Amazing ideas, but all major characters are variations of murderous sociopath, or else comic relief, and female characters are either inept, evil or ridiculously idealized supernatural beings of ineffable beauty, but zero of his women are human.
Maybe good scifi is written by shape rotators and good characters are written by wordcels, or to reference an older SSC which posits the same dichotomy in different words:
https://slatestarcodex.com/2018/12/11/diametrical-model-of-autism-and-schizophrenia/
This may be the bottle of Riesling talking, but if scifi is a (or the) shape rotator genre, is fantasy a (the) wordcel genre? n=1, Tolkien invented several languages
It's really interesting to me that Andy Weir has become a well-known, high-profile sci fi author these days. I still remember him better from the days when he was still an aspiring author back when he was doing the webcomic Casey and Andy.
The final page of the comic ends with a joke based on the fact that an editor had rejected his manuscript for a story saying that it had "too few characters," and suggested that he add more to fill it out. He protested that it was a complete story without room to add more meaningful characters into the narrative, but joked about the possibility of adding another character named Bob to the story by occasionally finishing scenes with the line "Bob was there too."
Not that I don't think Weir is a good writer, I enjoyed his work before he was even published. But I do think his being catapulted to the limelight after years of writing as a relative unknown says a lot about how much that level of prominence owes to luck of the draw.
Asimov tends to write way too many characters, most of whom the reader neither knows nor cares anything about, and this is definitely a failure mode. JK Rowling seems to strike a good balance, with lots of characters but each character having sufficient backstory and feeling like a real person. Andy Weir goes deep with only a few characters. I don't mind the last two.
The Martian and Project Hail Mary are really really good. I don't think it's the luck of getting selected for attention by the media. I think they got selected for attention because they were that good. Same with Rowling. But when Rowling does Serious Adult Literary Shit, it doesn't seem that great. The coincidence is that Potter put her in the right mindspace to do her best work.
Agreed; the golden age is still great to read because of nostalgia and the founder effect (new ideas were still plentiful to grab, and you get the originality bonus of being the first one there; and not only the bonus, the freshness that comes from it is enjoyable). But I prefer 1990-2010 for a more cynical approach, more complexity, and much better literary quality (characterization, writing style, ...).
Golden age: I enjoyed Asimov, Herbert, Farmer, Pohl, van Vogt, Silverberg, ...
Later: Banks, Stephenson, Varley, Powers, Simmons, King, for example, and many others that do not immediately come to mind.
Lately, I do not read so much SF/fantasy... maybe because I am getting older, maybe because I have less time due to family activities and so much access to video content, maybe because post-2010 authors suck (for me). It's not clear how important each factor is, but given how much I enjoyed SF/fantasy before, it would be nice to check whether factor 3 could be corrected. Any modern authors to suggest?
Same thing for me: I loved Asimov and co. as a teenager, but when I tried to reread them later I did not enjoy them that much, as the writing quality is not that great.
Among recent SF novels, many people seem to like The Three-Body Problem trilogy by Liu Cixin. I loved the Children of Time duology by A. Tchaikovsky and the Interdependency series by Scalzi (bonus: this one is quite funny!).
Thanks! The Three-Body Problem was on my to-read list; I didn't know about the others. In addition to the founder effect, there is also a deep resonance between the mood/obsessions of the time and SF/fantasy. It's quite apparent when you look at the golden age (fast tech progress, post-WW2, cold war) and the post-golden age (hippies/drugs/the sexual revolution, then early ecology and the end of the cold war). Maybe I have trouble with the current mood, which could be behind factor 3 (most current SF sucks, for me).
"Any modern author to suggest?"
I like Becky Chambers (Wayfarers series) and Max Gladstone (Craft Sequence). Becky is SF (rockets, aliens) but only as a background. The stories are very character driven, and, realistically, not much happens that is critical. I think there is some similarity with the Firefly TV series (character driven, not really aiming to 'overthrow the galactic empire' or anything else) and Becky's Wayfarers series. If you like that sort of thing then I'd give it a try.
Max Gladstone's Craft Sequence is more fantasy. But it's neither epic fantasy à la Tolkien nor modern fantasy (e.g. Harry Potter or "So You Want to Be a Wizard"); it's something else. Which was refreshing for me, though maybe it isn't as different as I think -- I'm not really current with fantasy.
In any event, the basic premise behind the Craft sequence is that gods are real (think Zeus or Aztec gods rather than Y*wh) and about 100 years ago human academics figured out how to do what the gods did. And there was a war. And mostly but not entirely the gods lost and so now we have parts of Earth ruled by gods and parts of Earth ruled by the folks who overthrew them. And you can go to college and learn the techniques to be on the 'overthrew them' team. But magic works a lot like contract law, so you can easily wind up DEAD if things go poorly. Complications ensue ...
Ninefox Gambit was interesting and weird, but seemed very classic SF. The idea was explored. The characters ... not so much.
Thanks, I liked Firefly (the movie more than the series); I thought it was a very good loose "real-life" adaptation of Cowboy Bebop (all the more since I recently saw the actual live-action Cowboy Bebop - yuck), so it's certainly worth a try. Craft also seems interesting; it makes me think of American Gods, which I like a lot (both the book and the TV series), although it tends to get a little diluted over time, "à la Lost"....
So yeah, it seems there are still interesting things lying around... maybe it's more factors 1 and 2 that are at play, plus having to find new prolific authors I can trust to reliably put out stuff I like (because quite a few of my preferred ones are, very unfortunately, dead :-( ), since I can no longer start with time-sorted classics like I did when I discovered SF.
I...wha...how....
Okay, what's an example of a video game level that's the equal of the Sistine Chapel as a work of art?
Assassin's Creed Unity (french revolution era) has some great art and architecture which is all based on real French stuff. It's not original but it's better than the Sistine chapel.
I think most of the individual paintings on the Sistine chapel are not great, but there are hundreds of them. Lots and lots of in-game character models look better. I also very often see stuff on deviantart or shutterstock that looks better than one of the extras at the Sistine Chapel.
If you manage to separate the historical\cultural baggage to make a purely aesthetics based comparison, It's not hard at all to find videogame art far superior to the Sistine Chapel. Just recently I was in awe of some of the stuff in Doom Eternal. I'm also a big fan of Halo's Forerunner architecture.
But I think most people aren't really willing to do that separation. Videogames are low status. Michelangelo is ultra status. Comparison is an insult in their minds.
While the greats of history were aiming for aesthetics rather than the sort of qualities that modern artists aim for, I think that measuring one work or another as "superior" has to account for the tools that the creator(s) had to work with.
Michelangelo had to work on his own, on his back, with physical paints dripping in his face, nearly causing him to go blind. The paints themselves also fade over time, and probably don't have the same color balance now that they had over 400 years ago.
It's not like he did things the hard way for extra credit, Michelangelo didn't even want to be pegged to paint the Sistine Chapel in his own lifetime. If he had been born in modern times, if he still decided to become an artist, he probably wouldn't have been a painter or a sculptor. He worked at the cutting edge of the media available to him at the time when he worked, and would probably have been happy to have toolsets with more expansive capacities which didn't get paint in his eyes.
On the other hand, the fact that it took years and nearly made him go blind probably has a lot to do with why the Sistine Chapel is so high status. Video game modelers don't put that kind of sacrifice into their work.
I agree with what you said(*), but I don’t think that Razorback was arguing that “Michelangelo is ultra status” is wrong, per se. I think he’s just pointing out that just because it’s ultra status doesn’t mean that relatively low-status stuff like videogames can’t contain better works of art, just as art. Yes, it’s “cheaper” art in a lot of ways (literally in the sense that many costs are less), but it can be better, and people really do sometimes confuse the two and even feel insulted.
I mean, Stonehenge is pretty fucking impressive, given who made it and how, and I really appreciate it, but it’d be silly to argue that you can’t find better works of architecture in low-status things today, including in stuff like Minecraft, or even public toilets or barns or something.
(*: well, I didn’t know the thing about nearly going blind, so “believe” rather than “agree” there.)
Most of Trine 2, when played in stereoscopic 3D on a 10-foot screen with popout.
Old scifi is bad compared to new scifi, but old fantasy (Tolkien) is much better than new fantasy.
I'm sure I'm being mega selective, but as a simplified model, it's interesting that the genre that's supposed to take place in the future keeps getting better, while the stuff that's happening in the past apparently peaked in the past, and keeps getting worse.
…speak for yourself
How about "The Classic Rock" effect. Because that's why Classic Rock station playlists are always 90% the same as they were 25 years ago. The old art we remember is, by definition, a "best of" soundtrack.
I’ve tried to explain this effect to people not nearly as elegantly. I’ll be using your term from now on!
Thank you for sharing this. I also will be using this term going forward
I don't think that's it, because there is a common sentiment that the 2013-2016 range contains not just Scott's best post but *all* his best posts, a trend too strong to be pure coincidence.
Don't you think the ivermectin one is up there?
I was thinking of including "(or nearly all)" to hedge against this kind of response and cut it for brevity.
Not a bad post by any means, but if you're into Scott more for his earnest insight than for his research the only section that really shines is the one about Martians.
When it comes to "hot takes" this may be true, but I'd say that his book reviews stayed consistently great.
The neoreactionaries were interesting when they were talking about bizarre game theoretic arguments for monarchism. But now they seem to have been seamlessly absorbed into the mainstream right, saying the same kind of vaguely racist stuff that any racist uncle will give you, but with longer words
The really interesting thing about Mencius Moldbug was that early on he was saying that the political right should just give up, because they can only be a phony opposition, thus forcing the left to act responsibly. Then he changed gears and started talking about a "True Election" that could bring a President Palin to power. Less interesting, more conventional.
I think Moldbug and most of his neoreactionary buddies realized they could be ideologically-pure (and thus doomed to so much street-corner ranting) or they could be effective (compromising with the mainstream could give them a chance to try and steer the mainstream in a direction they like).
Is he really that effective? He's on a Substack now, and he has been cited by Greenwald, but I don't know that he can claim to have affected any policies (as Scott is saying with COVID above).
Well, I think it's fair to say that the Republican Party of 2016+ is a lot more Moldbuggian than the Republican Party of Mitt Romney or John McCain.
It's impossible to say how much credit (or blame, depending on your point of view) Moldbug personally gets for this, but I think it's fair to say he's been an important voice in a movement that for better or worse has opened a lot of young right-wingers' eyes to the idea that there might be more to right-wing politics than just "repeal Roe" and "tax cuts for the rich".
There has been a turn against the neocons (which Moldbug himself once supported). But I think that's more because GWB left a legacy of failure people no longer wanted to be associated with.
I think that asserting that the right-wing mainstream moving further towards Moldbug's worldview and Moldbug tacking further towards the right-wing mainstream are wholly unrelated, or purely a matter of Moldbug "selling out", is a bit contrarian. Something doesn't have to be labeled "The Mencius Moldbug Bill for Installing the Monarch via Salami-Slicing" for it to be clearly influenced by the Dark Enlightenment he helped pioneer.
I don't think Donald Trump is familiar with any sort of Enlightenment. He just ditched unpopular Republican stances.
He's effectively monetized his bullshit, so there is that.
Dude should have just gotten cut a giant check for his Cathedral stuff and sent out into the internet to do more interesting shit, instead of having to pay bills.
I think Urbit was supposed to be his other interesting thing.
After the election Moldbug was posting about how if the Republicans were *really* serious about winning an election, they would have had state legislatures overturn the results and install new electors while Trump used the military to seize power. Which definitely sounds "interesting" to me, but I suppose it's more conventional than "we should have a king who secures his power with cryptographically-locked guns."
Consider the midwit meme - neoreactionaries seem to reach the same conclusions as really stupid people, but for very sophisticated reasons.
My main problem with neoreactionaries is that they tend to be such habitual contrarians they can't admit that mainstream sometimes is, in fact, right, or a progressive position actually makes sense. A lot of Moldbug's posts seem like a purely intellectual exercise in Devil's advocacy.
"Better than the Beatles effect" is one name it's been given. In statistics terms, it's just a trivial order statistics fact: the probability of the next sample being the maximum out of the n samples so far is just going to be 1/(n+1).
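That 1/(n+1) fact is easy to sanity-check with a quick simulation, assuming i.i.d. continuous draws (so ties don't matter); the sample sizes and trial count below are arbitrary.

```python
import random

def prob_new_is_max(n, trials=100_000):
    """Empirical probability that a fresh draw beats n previous i.i.d. draws."""
    wins = 0
    for _ in range(trials):
        previous = [random.random() for _ in range(n)]
        if random.random() > max(previous):
            wins += 1
    return wins / trials

for n in (1, 4, 9):
    print(n, prob_new_is_max(n), "expected", 1 / (n + 1))
```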
Since no one else mentioned it: isn't this called regression to the mean?
It's more likely that anything new is going to be average rather than better than the current best ever
I don't think you suck, or that you have gotten worse. I've been reading you since, I believe, "The Toxoplasma of Rage," and that has been awhile. You have your hobbyhorses (AI risk, prediction markets, predictive processing), but hey, who doesn't? Like you, I've thought about the big high-level stuff, and know those debates as deeply as I want to. Grand pronouncements aren't needed. I'm here for the insight porn--give me that on any topic, any level, and I'll be delighted to read it.
This is where I am as well, long-time reader, no sense you've gotten worse. Also very glad to hear about the community-oriented projects. But it's your writing around mental health stuff specifically that brought me and kept me here, and that's you writing from expertise grounded in practice (in addition to all your other good qualities), which is a somewhat different place you write from than the other pieces you write.
Same for me! I have been a great fan of SSC since 2017 and read most of the older posts, and I am now a great fan of ACX. Yes there is some evolution in the content, which is great, it would be kind of worrying if Scott did not change at all!- but It is still fascinating.
This is probably not a very central case, but: my now-husband introduced me to SSC in 2018. He and I used to read aloud SSC articles to each other while hanging out on a Sunday afternoon. We are definitely NOT grey tribe / rationalist folk, but sitting down to read one of your articles always felt very cozy, like just hanging out with a friend who was earnest, thoughtful, funny, and way smarter than us. I still get excited when I see a new AXC newsletter, but the impression I'm getting is that the content is growing more niche, and more and more I find both it and the community around it a little alienating. Still net positive for me though!
> We are definitely NOT grey tribe
I am curious ... which tribe enjoys reading grey tribe in this case?
Bible-believing Christian, so red-tribe is probably the closest, though not a perfect, match (insofar as these categories apply to Canadians!)
I could have written this same comment and consider myself solidly blue tribe (though perhaps with a greyer tinge than some of my peers, possibly in part due to the influence of this blog). For me the biggest decline in quality has been the comment section. I'm looking forward to an improvement now that there's a report function.
Scott, I sincerely admire your brilliance and your achievements. I'm only posting this in response to your direct question.
I learned from you, and now try to practice as a life principle : let what I say be truthful, necessary, and kind. On this basis, I was dismayed by your jokey headline, "My Ex is a Shit-eating Whore". It didn't seem very necessary or kind.
I didn't write that document, Aella's ex did. Aella sent it to me, so I assume she's in favor.
Thank you for clarifying, and I'm sorry for making the assumption. I guess I missed the attribution somewhere? I will delete my post, unless you feel there is some value?
I was unaware of that headline, which was funny, so I vote don't delete.
additionally, if you delete it, these comments will become incomprehensible to future readers, and that would be annoying for them
From how it was presented, I also thought that Scott wrote that document, so I think it is good to have this clarification.
Thank you for the +1, I'm glad this has turned out to be a comedy of errors ;)
First everyone thought Scott married Aella. Then everyone thought Scott had previous dated Aella and called her a shit eating whore. What’s next?
... All I'm saying is, have you ever seen Scott and Aella in the same room?
Aella posted on Twitter that she thought it was an "incredibly sweet ad" (or "very sweet" or something, not sure about the exact adjective). If she doesn't mind, I don't think there's much of a point complaining. Nate may have even asked her before posting it.
The attribution is confusing; the first instance of 'I' in the document links to the author's Twitter. (I thought it was better-attributed but that was because I already knew the context. Illusion of transparency, whoops!)
Also since it doesn't look like anyone else mentioned it, Aella is a sex worker and received a fecal transplant: hence, 'shit-eating whore'. I imagine all involved find it to be a cute joke, though I can totally see how it might seem in poor taste from the outside.
I thought it's yours and wasn't even surprised. I'm pretty sure everyone was low key shipping you with Aella for years.
For anyone else wondering, it was posted on the most recent Classifieds thread, and originates from here: https://twitter.com/Aella_Girl/status/1477784870822764547
It was funny and complimentary. Did you read it?
Hi George, it sounds like you're directing that question to me..? Yes of course, I did read through a couple of times in sheer disbelief, and cringing all the way through, in my mistaken belief that Scott had written that "review" of Aella.
Thank goodness this imaginary drama had a happy ending, so to speak.. ;)
You're great, your blog is great. People just want more and better of every good thing.
One other factor you didn't mention but I think was a factor in my own perception of slowed insights was that when first reading ACX I got to consume your best written and most insightful posts of the last 7-8 years in 3-4 weeks. Twice per week I got to read one your top 10 posts. But now that I've caught up, these posts only come once a year which definitely feels slower by comparison.
On this point specifically, I wondered if there was any appetite for reposting some of your 'greatest hits' (maybe even from the pre-SSC days) here on ACX, so the community can enjoy reacting to them in real time
For what it's worth, I haven't noticed a decline in quality, and for example the ivermectin article was probably as good as, and more important than, anything you wrote in the 2013-16 period you cite as being a high point.
I would greatly enjoy this - I found ACX in 2021 and giving some air time to the "Greatest Hits" dating back to pre-SSC would be wonderful. Please consider doing this, Scott, as you are so prolific I'm absolutely certain I've missed many gems.
In case you don't want to wait for Scott to do this, you can always browse some of the older compendiums of SSC +LW articles compiled by other people; my favorite is "The Library of Scott Alexandria", which attempts a categorization of sorts:
https://nothingismere.com/2015/09/12/library-of-scott-alexandria/
I would like that a lot. Especially if they were edited/updated, either throughout or with post-scripts, on how Scott's views have evolved since the article was first written.
Same! Maybe we could call that "the Netflix effect"? Or perhaps the reverse of the writing quote Scott mentions.
Same for me. Falling down that particular rabbit hole ("The hammer and the dance" was my entry point) and seeing how much there was to discover, and how it resonated, was an intellectual rush. So, first crush, rose tinted glasses, lifelong hopeless romantic - or, if you prefer - the elusive chase for that feeling of the first high (not that I would know about that, though)
Interesting! I've been reading Scott's blog since it was a LiveJournal, so never got an intense rush of great posts - it's always been a slow drip. From my perspective, the quality hasn't dropped off at all; in fact, when I went back over his recent posts to pick my favourites for the ACX reader survey, I was amazed how many top-notch posts there had been since he started the new blog.
I wonder if many folks in your audience are at similar places on their developmental timelines as you, and are projecting changes (less excitement) they feel about themselves onto you.
A totally different theory: I wonder if the hiatus you took after the NYT brouhaha actually undid some optimizations, and you're finding your way back to the local minimum. (In other words, you were a bit rusty for a while.) But this doesn't match my impression of your writing.
Finally, you also have other things going on in your life. People famously get a bit more boring after they get married and have kids.
I'm blogging a lot less than I was when I started out (my first post was in September 2007), and I'm not famous, nor do I have interesting things going on in my life. On a related note, I went through a stretch where I wasn't reading nearly as many books as I used to, but now I'm trying to shift back (which is where the material for my blog posts comes from, now that there are fewer other blogs to write about).
I found out about you from the NYTimes brouhaha. I don't think you suck. For example, I really like the use of the word 'brouhaha' there.
haha...bro(u?)
Remember the somewhat rambling post about the cliché where various colored pills gave superpowers, and you fleshed out a world where certain people took those pills, and BRUTE STRENGTH won the day upon the heat death of the universe? That was such a fun thing to read, and the kind of thing I'd totally read more of even though it wasn't particularly intellectual. (I love the intellectual stuff too, but those fun posts are the kinds of things I think people are missing.)
+1
Anglophysics was also good. Love me some Weird Scott Fiction
+1
I also greatly enjoy the Scott Fiction, and would appreciate more of it.
(In the spirit of recommendation, I found In The Balance to be a very fun one)
+1
Googling “red pill” leads to FASCINATING OPINIONS. I liked how the story ends with galactic civilization being saved in the dumbest, most rule-breaking way possible.
I think about Universal Love, Said The Cactus Person probably 2-3 times a week, every week; so smart and silly and delightful.
+1
+1 Loved Unsong. Just pivot to writing a web serial.
Oh yeah
+1 These were actually the posts that I enjoyed the most, and it feels to me that they have gotten less frequent.
what was the title?
It was "...And I show you how deep the rabbit hole goes"
https://slatestarcodex.com/2015/06/02/and-i-show-you-how-deep-the-rabbit-hole-goes/
Perhaps this is tangential to what you're writing here, but... I have to actually write this out properly at some point. While I've always had my issues with the rationalist community, when it was a smaller niche it was always rigorous. I could always expect real grappling with evidence and an acknowledgement of the complexity of the world. And while I can't say that any individual has changed for the worse (and I'm not accusing you of this), I think that as the community has grown it has become, for lack of a better term, a meme community. By that I mean that the larger rationalist community seems to me to be more and more defined by a collection of REFERENCES rather than a mode of thinking. So where once a reference to motte and bailey took advantage of a useful shorthand for beginning a conversation, one that recognized that there are limits to those kinds of metaphors for thinking, now the point is merely to say the term to indicate insider status. It's a devolution into magic words philosophy, where people launder incuriosity through these terms and ideas. The holy texts cease to be invitations to complicated conversations and become instead places in which to hide, intellectually.
The thing is... I don't think there's any way that an intellectual tradition like rationalism can grow without that happening. It's an inevitable artifact of getting more popular. There's still tons of great and stimulating conversations happening under the banner. But part of my reservations about Julia Galef's book lies in this seemingly unavoidable consequence of broadening the appeal, the tendency to fall into "one weird trick" approaches to critical inquiry.
For the record I don't think your work is any worse than it has been in the time I've been consuming it. I do think the commenting community reflects the meme philosophy I'm talking about, sometimes, though I can't pretend to be a very rigorous reader of the comments.
My own feeling is that this stuff was already in a pretty advanced state of memeification by 2015-2016, when I first became aware of it. Maybe one way of looking at it is that different corners of a movement will have different balances of memes and real content, and you have to be alert to that when you are deciding where to hang out.
What's your 'sampling method' for measuring "the rationalist community"? I don't think the commenters here are a representative sample. (I don't think there _is_ an easy way to find a representative sample anymore. The (original) 'community' dispersed years ago.)
Personally, I say (half-jokingly) the rationalist community died when I said that the bailey was not (as Alexander said) the good land around the castle. It was a fortified courtyard. No one ever had fields in a bailey. It was part of a defense in depth strategy. The motte was a high tower that was purely a defensive structure. You'd then have a bailey (or better a series of baileys) around it connected by defensible ramps or bridges. So while it was a cool metaphor it was not all that accurate.
The person responded that it was a metaphor and I was being pedantic, in a tone that I'm very familiar with. That high school tone that says: You may be right. But you're uncool for being right. Your rightness makes you not one of us. I wasn't upset, but I was disappointed in how utterly uninteresting and predictable the response was. There is now a rationalist clique, and I was being told that if I don't get on side I'm going to have to go sit at a different table.
I don't blame rationalists for being like this exactly. But it isn't what I come here for.
Thanks for clarifying, because the conflict between the internet-rationalist definition of a metaphorical "bailey" and the actual castle "bailey" had always confused me.
Me too!
Me three!
Yeah. To draw it out completely: Scott wrote a piece repeating an argument by Nicholas Shackel. Shackel said that the motte was the castle and the bailey was the valuable farmland around the castle. Old feudal lords would (according to him) have their peasants farm in the bailey. When raiders came by, everyone would go hide in the motte and rain arrows down on them until they left. Then they could get back to farming.
Motte and bailey is when you claim a huge but useful field that's epistemically indefensible. If someone challenges you, you retreat to a smaller but more defensible claim. Then, once the confrontation is over, you go back to the wider but indefensible claim that's useful.
It's a useful enough concept. But that's not what a motte or a bailey is.
Side note: I'm often surprised by how little people know about castles considering their gigantic cultural imprint.
I wanted to dig into this a bit more.
First, let's look at Shackel:
"A Motte and Bailey castle is a medieval system of defence in which a stone tower on a mound (the Motte) is surrounded by an area of land (the Bailey) which in turn is encompassed by some sort of a barrier such as a ditch."
Now Scott, referencing Shackel:
"The writers of the paper compare this to a form of medieval castle, where there would be a *field* of desirable and economically productive land called a bailey, and a big ugly tower in the middle called the motte. If you were a medieval lord, you would do most of your economic activity in the bailey and get rich."
Field here might mean farmland.
Now Wikipedia:
"The bailey would contain a wide number of buildings, including a hall, kitchens, a chapel, barracks, stores, stables, forges or workshops, and was the centre of the castle's economic activity."
Scott seems pretty close here, except maybe he intended the word "field" to imply farmland, and that would be wrong? Although, see Carisbrooke Castle from the Wiki, where farming fields seem to be depicted as being inside the bailey.
Unfortunately, the description of the action (running away from raiders) firmly cements that, in their minds, the bailey was outside the wall. Otherwise they'd be implying that the proper thing to do in an invasion was to run inside the walls, then give up the outer layer of defenses and run further inside.
That said, the metaphor isn't awful, because, terms aside, it is an accurate description of the purpose of a castle (especially one with a large bailey) relative to the farmland around it. It just gets the names wrong.
I agree that representing the bailey as farmland is not accurate (after all, walling in your farmland is known as "a border wall"), and low-intensity farming was obviously the vast majority of economic activity in mediaeval times. But I wouldn't say that the bailey was necessarily completely unproductive either; it *could* be in some castles, but AIUI things like horticulture and markets were frequently inside the bailey walls, and those are certainly of high economic value *per acre* (as opposed to total).
I think the error was introduced by Scott - if Wikipedia’s page is to be believed, the original formulation was by Nicholas Shackel, and he seems to get the definitions of motte and bailey mostly right.
I think the metaphor works just as well if you use the correct definition of bailey - either way, it’s a broader, more useful, but harder to hold area than the “insalubrious but defensible, perhaps impregnable, Motte”.
I think I knew what the bailey was from the beginning and it seemed to fit the metaphor, perhaps because I was thinking of it geometrically, the core and the area around it. If Scott at some point defined it as the good land I don't think I noticed.
The metaphor is directionally accurate. The motte is a purely defensive structure. It is surrounded by the bailey, which has a lot of actually useful and productive stuff and is less well defended than the motte. In the face of a sufficiently severe attack, you abandon the bailey and retreat to the motte until you've driven the attackers away.
Claiming that the bailey is farmland is technically incorrect. And if you're not e.g. using that technicality to derail a substantial debate over something else, you should get nerd credit for pointing out the technically correct version. Not some smug "go away troublesome outgrouper" response.
But, directionally accurate is about all you can expect from a metaphor, and once one is accepted into the language or jargon, meh, forget it, Jake, it's Chinatown. Well, a few miles north of a Chinatown.
I find it a little odd that you let one person who said that to you in that manner kill the rationalist community in your mind. There are assholes everywhere, which I assume you're aware of, so I'm guessing I don't understand your point?
I did say half-jokingly. The non-joking half is that such a statement doesn't happen in a vacuum. The fact that the person even tried implies the existence of the cliquishness I'm put off by. And I've not subsequently seen anything to change my view that, to use the in-group language, the ideology is not the movement.
“That high school tone that says: You may be right. But you're uncool for being right. Your rightness makes you not one of us.”
Not saying that phenomenon doesn’t exist, but I don’t think that’s what happened here. What happened here is that you were technically correct but missing the point. You were letting pedantry wreck a useful metaphor without providing an alternative with the same level of utility.
Nobody likes a pedant - not because being correct is uncool, but because having someone constantly show off how smart they are by “well actually-ing” trivial points makes it harder to have productive/interesting conversations.
I think this depends a bit on Erusian's intention: if they simply wanted to convey a neat history fact and then got slapped down, I could see that being an unpleasant experience. If their point was that the metaphor was bad because of this detail, then that is literally pedantic. Even assuming the former, though, I think it is possible that the person who replied to them assumed the latter.
So, this interaction doesn't move me much in either direction.
See, this is a cliquish attitude. The underlying assumptions here are:
1.) The tribal totems of the group (such as a specific metaphor) are more important than literal correctness. You're also inventing a scenario which makes me wrong. That is a somewhat defensive reaction and seems soldier-y.
2.) I deserve to be socially punished (someone "nobody likes") for challenging a commonly held but false belief among the group. (In this case, the definition of bailey.) If I am going to correct it I must leap through certain hoops to make the criticism acceptable, effectively requiring a loyalty test. ("providing an alternative with the same level of utility.")
3.) If I want to be part of the group I need to accept group policing on what is trivial and what is central. You're telling me that using a word in a wrong way (which confused at least three people) is unimportant. I don't think so. But you, as a member of the community, have decided against me.
These are all important things for maintaining group cohesion! But they're not rational. They make you more wrong, not less wrong, in the interest of collective wrongness creating group cohesion. That's literally true here: you're using a word wrong.
I agree that this is a cliquish attitude, but I'll disagree with your analysis of the use of 'motte and bailey'. Since it has become a standardized term of art (or a cliché, if you prefer) its meaning has shifted, and its etymology has become irrelevant. That's what happens in language. You bolster your arguments even without the use of a pillow, and you police language even without the use of a badge and a gun. Once something is metaphorical, it cuts itself loose from its original meaning and wanders off on its own. We may not like it, but we can't change it, and complaining about it will have no effect.
End of sermon. I agree that there's too much grumpiness on this blog at the moment, by the way, and I'm trying not to be grumpy about this.
I think the issue here is twofold: Firstly, people who are familiar with the normal definition of the word will be confused by the new definition. Secondly, it will misinform rationalists as to what the word means. Basically, it cuts two linguistic communities off from each other by making them unable to communicate unless the full context is understood. But I have no objection to words changing per se.
Etymology may not matter for words, but I don't think etymology ever becomes irrelevant for a phrase that's a metaphor. For instance, think of the phrase "toe the line" -- it means to follow the rule precisely. The phrase means to metaphorically do the equivalent of what athletes do at the start of a race, when they bring their feet so close to the starting line that their toes touch it, but don't go over: they toe the line. Now some people write the phrase as "tow the line." Everybody still knows the phrase means follow the rule precisely, but the metaphor is lost, because the phrase is mostly senseless, and what sense it has has nothing to do with precision and rules. You could just as well declare that "blubbering turd" means to follow the rules precisely.
I want to first say that 1) I am not part of the rationalist community, and in fact probably someone who's about as far away from the rationalist community as possible in geography, mindset, and social class, and 2) I really want to phrase this in a way that's less confrontational but am having an extremely hard time doing that right now. With that said, this seems like a fully-general argument against engagement with any culture or social group or any language that does not have perfect and immutable sign-signifier linkage built into it (such that metaphor, analogy, terms of art, etc. are considered to be as severe a violation of grammar as "me said this sentence"). I'm not quite sure what a "non-cliquish" or "perfectly rational" interaction by your standards would look like. I'm not sure it would even be possible, even in Raikoth.
Not really. It's extremely simple. You have to define your terms. It's like the famous example: π < 2. Now, you can read this one of two ways. Obviously wrong if you take π to mean (as it commonly does) 3.14 etc. Or you can, in the context of the equation, realize that π is being used as a variable that's about 1.3. And then someone can say, "You know, it's a little weird using π to mean anything other than 3.14." And the other person shrugs and says they like to use it. And everyone understands what's going on.
Now, if the person said, "No, it's the same π that is used to calculate circles, and it's less than two," then they'd be incorrect. (Or they'd have some mathematics above my pay grade to back it up.)
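(To make the same point concrete, here's a minimal sketch in Python; this is just my own illustration, not anything from the original exchange, and the 1.3 is the stand-in value from the example above.)

    import math

    # Local name "pi" deliberately redefined to stand for something else (~1.3).
    # The claim "pi < 2" is fine here, because the term was defined up front.
    pi = 1.3
    print(pi < 2)        # True, under this local definition

    # The same claim about the circle constant is simply false.
    print(math.pi < 2)   # False: 3.14159... is not less than 2

Same symbol, different referents; once the redefinition is explicit, nobody is confused.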
Yes, but if people need to engage in heavy circumlocution about every single word, you're going to end up setting the barrier to entry for communication so high that people (at least, the kind of people who exist here and now, as opposed to some New Rationalist Human) just aren't going to engage in it - at least, not with the people who insist on setting that barrier up. The important thing about communication is that the root idea is conveyed, not that every word perfectly aligns with what is written in a dictionary. That isn't a "cliquish definition", that's just how language actually works; cavemen didn't have dictionaries to point at when they needed to communicate the idea "A lion is coming, we need to leave or it will eat us." As others have pointed out, "motte-and-bailey" conveys a complex idea in three words, and does that regardless of whether bailey means exactly what the man who coined the term thinks it means, and in fact still works as an analogy even with the meaning of bailey you're insisting be acknowledged. The idea that the whole phrase is poisoned because of one minor factual error is, essentially, throwing the entire nature of language out with the bathwater.
1) You’re asserting your own totem here: literal exact correctness is the most important thing, and anyone who disagrees with you is “cliquey”. That’s a very weird attitude to have about a metaphor, of all things, and frankly pretty exclusionary on your own part.
2) I’m not saying you deserve to be punished, I’m saying you don’t deserve the praise you’re demanding for pointing out a correct, interesting, but not really relevant factoid. You accuse me of being “soldiery” but honestly you seem to be the confrontational one here. You seem to demand some sort of prize for superior intellect, and having failed to receive that, decided “you all are cliquey losers, I’m taking my ball and going home”.
As for hoops to jump through - no one is saying “you can’t be a rationalist if you don’t believe this incorrect definition of ‘bailey’” they are saying “motte-and-bailey is a useful metaphor for thinking about a common rhetorical strategy, this is true whether or not it gets ‘bailey’ exactly right, so we’re going to keep using it”.
3) Nobody is “policing” you out of the group. Nobody even really has that authority, least of all me. You’ve voluntarily decided to leave, because you can’t stand the fact that the name of a metaphor representing a rhetorical concept sort of relies on an incorrect definition of one of the words in the name of the metaphor.
I’ll add to that that even using the correct definition of “bailey” doesn’t change the metaphor - either way, the “bailey” is more generally useful, the “motte” is more defensible. That’s the only bit that’s relevant to the metaphor, the rest is just color.
So yes, I stand by my assertion that the distinction is trivial (for the purpose of discussing rhetoric. It clearly is not trivial for studying medieval settlement construction).
A key point from Erusian was that "you're also inventing a scenario which makes me wrong" and I'm inclined to agree. You did not witness the exchange; you are choosing to interpret Erusian's description of the exchange in a way that makes him look bad.
As an aspiring rationalist, I'm inclined to think you're hurting the reputation of rationalism with the approach you're taking. (even if you don't identify as rationalist.)
> Nobody likes a pedant
Hmm, I think I like pedants.
The bailey was kind of a fortified patio, but it was also a prison for serfs kept safe from bandits by the lord of the motte so he could profit from oppressing them. Um. I think of the metaphor as a polite way of saying 'bait and switch' without calling the other person a liar. And I think it works pretty well. It makes arguments more reasonable without harming any actual serfs or their lord's economic interests.
. . .
I don't think Scott's quality has changed much, but the comments section used to be a lot more right-left confrontational. If that comes back, the place will probably be purged.
Scott gives his (or Shackel's) definition of bailey. It's "a field of desirable and economically productive land [around the motte] called a bailey." You go into the motte to avoid attacks and then go back to farming in the bailey when the raiders are gone. So it's clearly a mixup.
I'm just going to say it. Scott would make a bad feudal lord. Any lords looking for advice on how to run their fiefs should read someone else.
For years I thought that a "motte" was an olde Englische word for a "moat", and a bailey was either where the bailiff lived, or another old word for a "jail" or a dungeon. A bailiff is a kind of jailer or dungeon-keeper, right?
So when your warriors could no longer defend the water-filled "motte", they retreated to the strong stone "bailey" for a last-ditch defense. Which was precisely backwards.
False friends everywhere!
Motte and moat actually were the same word. Motte literally means "hill" or "mound" but was extended to mean the earthworks (both hills and trenches) around a fortification. Eventually hills became less common and water-filled ditches became more common, and so the word came to refer mostly to a ditch filled with water. Now we say that a motte is the hill and the moat is the wet fosse. But they were the same word.
A bailiff is the person in charge of a bailiwick. A bailiwick is literally "a handed over/carried house." A bailiff is literally a carrier or bearer. Implicitly of something handed over. A porter in some cases. Wick means a home/house/village and therefore metaphorically an estate. So a bailiwick is a handed over estate. And a bailiff is the person in charge of it. But not the owner. This role as the person in charge but not the actual owner later created a royal class of officials as the king delegated various duties to the people he appointed bailiffs.
It's a false friend with bailey, which probably derives from the Latin vallum, meaning a type of wall (related to "poles, stakes", as in a palisade). A bailey is literally a walled area. Though some people think it might be related to the other word. However, bailiff is not the term for someone in charge of a bailey. It seems like in most of Europe the person in charge of a bailey was in charge of the gate that let people in and took something like that for their title. (Though, confusingly, sometimes bailiffs were in charge of what were called baileys! "Handed over things." And it wasn't uncommon for one person to hold two positions.)
great explanation, thanks
I've always found the motte and bailey notion opaque. Which one is more valuable? You could argue either way.
It seems more like a symbol of group membership than a useful concept.
Steve is working on a riff about getting your bait-and-switch switched to bait you.
I've seen people refer to it as field and fortress.
> It's a devolution into magic words philosophy, where people launder incuriosity through these terms and ideas.
Aren't you a literal communist? Pot, this is kettle, I've got some news...
you know nothing about Marxism, and thus have no basis for making the comparison
Ok, I know something about Marxism, and I think Cassander is right, but also your post does not in any way contradict this: "I don't think there's any way that an intellectual tradition like rationalism can grow without that happening. It's an inevitable artifact of getting more popular." The pot isn't calling the kettle black, the pot is pointing out that this problem is unavoidable....
How do you know Cassander doesn't know anything about Marxism?
Cassander is a rather prolific commenter in the rationalist community. My strongest association with Cassander is probably 'pro-capitalism' so I wouldn't be surprised if Freddie has read plenty of posts by Cassander about Marxism in the past.
This is such a pitch perfect response. Not even an attempt at an argument, just immediately lashing out with one of the oldest and laziest of ad hominem cliches. You've proven my point better than anything I could ever say.
And *your* attempt at an argument-that-was-definitely-not-an-ad-hominem-cliche was...what, exactly? What further response is called for when your comment was basically just "Says you, commie"?
> you know nothing about Marxism, and thus have no basis for making the comparison
He made a reasonable point, and... what did you even do here? Is this just a personal attack?
He didn't "make a reasonable point," he just said "you're a communist, therefore a hypocrite, therefore I have verbally owned you by pointing these two things out." It's an anti-point.
Pointing out that communism is known for the same lack of curiosity that FdB is accusing rationalism of is interesting to me. I would have enjoyed a substantial response.
If Cassander made a point about the intellectual habits of communists, that would be one thing, but he didn't. He *implied* the existence of such a point as scaffolding for a textbook ad hominem tu quoque ("you do X, therefore your criticism of X is null").
Not saying Freddie's response was *ideal*, but Cassander's initial reply was definitely "coming in swinging".
Is it known for that, though, or just "known" for that? Despite being a bit left of the average reader of this blog (but definitely right of Freddie!), I'm actually not a fan of the reasoning style of most of the communists I've encountered. And whilst I've not read Marx, my prior that he is right on most of whatever claims divide him from the average academic economist right now is low. But just stating as a fact that your ideological enemies are especially group-thinky is both question-begging and not really very useful.
This isn't a reasonable point, or even a point at all. Just look at their next reaction, a classic "not even an attempt at an argument..." - when he himself started with a lazy "har gar communist is magic". A pot and a kettle indeed.
This comment may seem rude, but in Freddie's defense Cassander's comment was equally uncharitable.
Trivial warning for this; no ban in sight, but please try to be less confrontational in the future.
I think that Eliezer intentionally baked memeification into his Sequences from the beginning. It's part of why I never actually managed to read them - I just couldn't stand the excessive appeal to pop-culture versions of martial arts and of Zen.
This isn't to say that you're wrong, but, well, the snake was in the garden from the beginning, you know?
The “Sequences” are so bad and Scott is so good and Scott loves the “Sequences.” I can never quite wrap my head around this recursive inconsistency.