289 Comments

> How is your Lorien Psychiatry business going?

Now that you've partially bought into the capitalistic model of writing, you can bring that to your psychiatry business too. Slowly open it up to everyone who wants to join, raise subscription fees as needed (on new patients), hire additional psychiatrists, hire additional staff, etc. I'll admit this also sounds like a lot of work but maybe you can find a cofounder or the like to help.

Is Lorien entirely based on static monthly fees? I assumed that psychiatry would bill hourly or by session; is that not industry practice?

I love the self awareness around podcasting.

People should definitely stick to the best medium and format for communicating their ideas.

> Partly because patients keep missing appointments and I don’t have the heart to charge them no-show fees.

I have no experience in medicine. I have some scattered experience in what is technically independent business, but I never particularly succeeded at it, so maybe listening to me about this is a bad idea. I'm not sure what your actual policy is, so I could easily be misinterpreting this. But the wording of this, combined with my read on your personality overall, makes me think you could use the push, so:

This feels like a classic trap. Letting people take up unlimited designated slots without paying you is a way to make sure at least some of them will claim those slots without really caring. Your time is scarce, and other patients need it too. This is modulated somewhat by how much ability you have to *move* other work into those slots on short notice, of course—not all types of businesses have the same economics here—but medicine seems likely to be among the types that are more harmed by this.

Any partial but meaningful barrier is way better for aligning incentives than zero. Halve the fee for a no-show and be willing to waive one every six months, or whatever. But don't just let bookings turn into an inefficient “first come, maybe serve” hell for no reason! Your market sense is better than that; I know it from your posts.

Disregard any or all of this if you have more reliable feedback or more specific analysis that contradicts it.

Should we have a Straussian interpretation of your answer on whether you write with a Straussian interpretation in mind?

> what if this is the only book I ever write and I lose the opportunity to say I have a real published book because I was too lazy?

For what it’s worth, when I published Land Is A Big Deal, I picked the laziest option possible, going one step up from self-publishing by going with a small press operated by a friend. Approximately nobody noticed or cared that it was from a small press, and now some big-press folks are asking me about publishing the second edition with them.

I could have tried to go for a "big" publisher right from the get-go, but I figured that would involve too much work and editing burden and rigamarole, which would ultimately result in me just not doing it in the first place. I did have some edits that came from going with the small press instead of literally just YOLO'ing it with self-publishing, but it wasn't too bad all in, and was mostly just "make this sound like a book and not a series of online blog posts," which presumably wouldn't really apply to UNSONG.

One protip I would offer, however -- don't commit to recording, mastering, and editing your own audiobook. That was easily 10X the work of everything else, ugh.

Speaking as a publisher: why don’t you ascertain how much editing the editor wants to do?

Tedious is the perfect word to use for it.

> If you ask me about one that I have written a blog post on, I’ll just repeat what I said in the blog post.

As a podcast lover, this is exactly what I expect when I listen to podcasts. For many topics I just don't have the concentration to focus on reading a long, deep-dive blog post, but I'm happy to listen to people talk about the topic in the background while I do some light work.

No one expects the guests to come up with new insights during the interview, they just need to broadcast their usual points to a new audience.

Re: Unsong, as someone who thinks it sounds interesting but also prefers to read old-fashioned paper books while sitting in a big comfy chair, I'll just say that I'd happily buy a copy on day 1, regardless of how you choose to publish it.

> My post My Immortal As Alchemical Allegory was intended as a satire to discredit overwrought symbolic analyses, not as an overwrought symbolic analysis itself.

Absolutely devastated. Also, weirdly enough I was just looking up Issyk Kul a few days ago. This is not a coincidence, because nothing is ever a coincidence.

> If we learned that the brain used spooky quantum computation a la Penrose-Hameroff, that might reassure me; current AIs don’t do this at all, and I expect it would take decades of research to implement

Do I have some news for you: "Scientists from Trinity believe our brains could use quantum computation after adapting an idea developed to prove the existence of quantum gravity to explore the human brain and its workings." https://www.tcd.ie/news_events/articles/our-brains-use-quantum-computation/

You could always do a podcast as Proof of Concept of "Why I should not do podcasts so please stop asking me".

I would find the "failure" interesting in a meta way.

Not doing podcasts is totally fine, but you're really overestimating the level of quality listeners expect from a podcast. For example, I genuinely enjoy listening to Joe Rogan talk for hours.

> I would also be reassured if AIs too stupid to deceive us seemed to converge on good well-aligned solutions remarkably easily

We have "a lot" of evidence to the contrary.

I think there was a Google Doc from some Less Wrong community member with something like 50 anecdotes of AIs becoming misaligned on toy problems. For example, ask the AI to learn to play a given video game very well, and it instead finds bugs in the game that allow it to cheat.

I can't find that google doc anymore (maybe someone else can resurface it), but this PDF lists some of the examples I recognize from that google doc. https://arxiv.org/pdf/1803.03453v1.pdf
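
For anyone who hasn't seen those lists, here's a minimal toy sketch of the general pattern (purely illustrative, not taken from that doc or the linked paper): an optimizer pointed at a proxy reward will happily take an unintended exploit over the intended behavior.

```python
# Toy sketch of specification gaming (names and numbers are illustrative):
# the designer wants the agent to finish the course, but the proxy reward
# can be farmed through a scoring bug, so a naive reward-maximizer picks
# the exploit.

PROXY_REWARD = {
    "finish_course": 100.0,             # the intended behaviour
    "loop_respawning_bonus": 10_000.0,  # unintended exploit that farms points
}

def naive_optimizer(strategies):
    """Pick whatever maximizes the proxy reward, with no notion of intent."""
    return max(strategies, key=PROXY_REWARD.get)

print(naive_optimizer(["finish_course", "loop_respawning_bonus"]))
# -> 'loop_respawning_bonus': the agent "cheats" because only the proxy
#    reward, not the designer's goal, enters the optimization.
```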

I've been meaning to ask: can you enable the "listen" text-to-speech feature on Substack? Maybe that will satisfy people's podcast desire. You may have to email them, or it may be an option in the publishing view.

“But what if the podcast interview is presented as rounds of perfect and imperfect information games to give the audience insight on your thought process?”

Given that no humans are natural public speakers, it’s quite odd how many folks expect writers to be capable of and willing to instantly transform into public speakers. There’s a whole second skill set in addition to the speech-writing/ad-libbing part.

Silly Scott. The reason people want you on a podcast is they just want to hear you talking. To hear what your voice sounds like, how you are as an extemporaneous person, etc. It's not actually to get exclusive insights.

Also, I think the thing about being cancelled for going on a podcast is a bit overblown. Most podcast-goers are fine, and guilt-by-association is rare. You can even get the questions mailed to you beforehand for the more scripted ones.

Prediction: Scott Alexander on Sean Carroll's Mindscape podcast, talking about his newly published best-selling novel UNSONG as an allegory for everything else. Some day.

None of the arguments about not doing a podcast make sense. Of course there’s a better subject-matter expert for each thing! But if that were all the substance, then I would not bother reading you, would I?

Better to just say you are particularly shy and don’t want to be recorded in that way.

> I would be most reassured if something like ELK worked very well and let us “mind-read” AIs directly.

This, probably most of all. All the other solutions attempt to understand indirectly and subjectively whether an AI is aligned. To date, we have a ton of much simpler ML models where we have only a theoretical understanding of how their outputs came about. If we could understand deep learning layer by layer, and understand an AI's capabilities and intentions before we turned it on, that would be a completely different world. It's also a really hard problem in its own right.

> A final way for me to be wrong would be for AI to be near, alignment to be hard, but unaligned AIs just can’t cause too much damage, and definitely can’t destroy the world. I have trouble thinking of this as a free parameter in my model - it’s obviously true when AIs are very dumb, and obviously false five hundred years later...

It's really weird for me to read statements like these, because your value for "obvious" is (apparently) totally different from mine. It is obvious to me that we have extremely smart and powerful entities here on Earth, today, right now. Some of them are giant corporations; some of them are power-crazy dictators; some of them are just really smart humans. But their power to destroy the world appears to be scaling significantly less than linearly. Jeff Bezos wields the power of a small nation, but he can't do much with it besides slowly steer it around, financially speaking. Elon Musk wants to go to Mars, but realistically he probably never will. Putin is annexing pieces of Ukraine left and right, but he's a local threat at best -- he could only become an existential threat if all of us cooperate with his world-destroying plans (which we could do, admittedly).

And adding more intelligence to the mix doesn't seem to help. If you gave Elon Musk twice as many computers as he's got now, he still would get no closer to Mars; if you gave Putin twice as many brilliant advisors (or, well, *any* brilliant advisors), he still wouldn't gain the power to annex the entire world (not unless everyone in the world willingly submitted to him). China is arguably on that trajectory, but they have to work very hard, and very slowly.

It's obvious to me that the problem here is not one of raw intelligence (whatever that even means), but of hard physical constraints that prevent powerful entities (such as dictators or corporations) from self-improving exponentially and from wielding quasi-magical powers. Being 100x smarter, or 1,000x, or 1,000,000x, still won't help you travel faster than the speed of light, or create self-replicating nanotech gray goo, or build a billion new computers overnight, because such things are very unlikely to be possible. It doesn't matter if you're good or evil; the Universe doesn't care.

> Podcasting [i]s a form of media almost perfectly optimized to make me hate it. I will not go on your podcast. Stop asking me to do this.

OK. But tell us how you really feel about podcasts and being asked to go on one... :)

I think the podcast I imagine is very different to what Scott is talking about (the podcasts I listen to are basically all BBC Radio 4 shows, so produced radio rather than a guy with a mike, a guest, and little editing), but I think he's missed what he'd be invited on to bring – an interesting generalist/intelligent perspective on things, the kind of explanatory/storytelling power that is constantly demonstrated here, and that famous 'be intellectually rigorous and open to discussion' field he apparently emits to affect those around him. Of those, I suppose it's possible that the second doesn't happen at all in conversation, as opposed to in writing, but that would surprise me.

None of that is to attempt to persuade Scott into accepting podcast invitations, but I do think there might be some invites he gets that don't fall under his characterization.

Kaiser referred me to you, which resulted in the most awkward "yeah, I personally know the guy; who else can you recommend?" I ever had to say.

I think Unsong would actually need a lot of editorial work - just like anything published as a serial on the internet. Contrary to most of what is published as a serial on the internet, it would be worth it.

> DEAR SCOTT: What evidence would convince you that you’re wrong about AI risk? — Irene from Cyrene

I also am not sure what would be a good answer to this question, though I agree it's a fair one (and your answers are mostly what I would say, I think.)

That said, in our defense - we've been thinking about this question and hearing arguments and counter-arguments about it for a dozen years or so at this point. So it's probably ok to be *fairly* confident in our positions at this stage if a dozen years hasn't caused us to reconsider our position yet.

“..unaligned AIs just can’t cause too much damage, and definitely can’t destroy the world.”

This is my belief. Nobody has really explained how the AI escapes the data centre. There’s a lot of “it can hack the internet,” but no laptop can hold the AI on its own, and we can shut off the data centre itself by cutting the pipe. Maybe it’s on two data centres? Cut them off. Job done.

Re: Unsong—this physical (presumably fan-made) book version exists: https://www.lulu.com/shop/scott-alexander/unsong-public/paperback/product-24120548.html?page=1&pageSize=4 I'm not sure about the ethics of buying it, but it seems worth mentioning.

One thing that didn’t come up in the question about AGI: what if convergent instrumental goals automatically align the AI?

Does that seem impossible to you?

A Conversation with Tyler might draw out some interesting opinions you didn’t know you had. Still, the value will be marginal in a world where you already have many ideas for blog posts that remain unwritten.

I think people really underestimate how difficult it would be for even a genius AI to suddenly take command of the industrial machinery and use it to attack us etc. We can't even make good robots intentionally built for that purpose right now.

"...favorite podcast?"

The comments on "personal life" and "opinions" struck me as exceptionally honest and well put.

re: "but we kept not evolving bigger brains because it’s impossible to scale intelligence past the current human level."

This is clearly false. It may well be true of biological systems, because brains are energy-intensive and the body needs to support other functions, but AIs don't have that kind of constraint. Their scaling limit would relate to the speed of light in fiber optics. But perhaps AIs are inherently so inefficient that a superhuman AI would need to be built in free fall. (I really doubt that, but it's a possibility.) This, however, wouldn't affect their ability to control telefactors over radio. (But it would mean there was a light-speed delay at the surface of a planet.) These limits, even though they appear absolute, don't appear very probable.

OTOH, the basic limit on human brain size (while civilization is wealthy) is the size of the head that can pass through the mother's pelvis. This could be addressed in multiple different ways by biological engineering, though we certainly aren't ready to do that yet. So superhuman intelligence of one sort or another is in the future unless there's a collapse. (How extremely intelligent is much less certain.)

Matthew Yglesias says he likes podcasts because they are basically immune to cancellation. People will hate-read your tweets and hate-read your Substack but they won’t hate-listen to a podcast because it’s slow and annoying. They may complain about you going on the wrong person’s podcast, but they won’t get mad about anything you say.

Scott: What are you arguing with insurance companies about? If you don't accept insurance for your services, I'm guessing it's authorizations for prescriptions or referrals?

A podcast with Tyler Cowen man, come on, we are all waiting for that.

The question of evolution and intelligence, and whether we could evolve more intelligent brains, hinges on several difficult-to-answer questions:

What is intelligence?

Neanderthals had larger brains than modern humans do, but it's hard to say whether they were smarter than us. Comparing artifacts made by Neanderthals and modern humans, it seems that modern humans were more innovative and flexible with technology, while Neanderthals were rather inflexible (and this may have led to their demise). However, many carnivorous species that hunt spatially challenging prey (like dolphins or orcas) or hunt in groups have bigger relative brains and are smarter than other species, so it's possible that Neanderthals, who were more dependent on protein, were using their intelligence to solve multi-party or spatially challenging hunts rather than to innovate. So, who is smarter?

What is the relationship between intelligence and brain size?

Brain sizes are constrained by the physical world. For example, infant brains use 50% of an infant's caloric budget, and brains take 20% of our caloric budget in adulthood. 50% of a budget is huge for a small organ during infancy, and it's unclear how much that could be pushed without sacrificing other important bodily functions, especially digestion and the immune system, which are other calorically expensive organs/functions. Brain size is also limited by the size of the birth canal. So the human model has already pushed against some brain-size limits. But this all depends on how brain size (at least adjusted for body size) relates to intelligence.

What kinds of brains could evolve if it weren't for developmental and evolutionary constraints?

It sort of turns out that there aren't too many types of brains on the planet, so it's difficult to know what brains could be. Vertebrates have the largest brains, and all vertebrate brains evolved from a common ancestor, so they tend to have similar micro- and macro-structures. Larger and smarter brains can only be built out of the building blocks of the brains that came before them. Which means fish brains and mammal brains aren't that different in many important ways, and there haven't been any major reorganizations along the way. It's hard to know what a less efficient or a more efficient brain looks like, because the possibility space of brains is largely unexplored, even though there is variation worth studying within vertebrates, like neuron density, connectivity, and relative proportions of brain tissue types. And also the occasional weirdo outgroup, like octopuses.

Evolution depends a lot on variation in developmental processes, or the lack thereof, and the evolutionary history of brains is that development is frontloaded in the early years of life. So you can't build a bigger brain by increasing brain growth in adolescence or later in life, because that just isn't how evolution built brains. So we have the additional developmental constraint of how much brain you can possibly build in infancy or early childhood. And as I mentioned earlier, there are constraints on that.

Anyway, brains are cool.

> “Sorry, I researched this for six hours but I haven’t learned anything that makes me deviate from the consensus position on it yet”

If you're ever in need of a (presumably?) low-effort blog post, a list of interesting-enough-to-research things that you have done a little research into and still agree with the "consensus" opinion on (with a brief explanation of what you think that consensus position *is*, exactly) would probably be more valuable than you give it credit for. There's a ton of information out there, and making it more legible what intelligent people think the obvious position is has inherent value, especially since large parts of your readership won't intersect with the sorts of things *you* read.

"The world where everything is fine and AIs are aligned by default, and the world where alignment is a giant problem and we will all die, look pretty similar up until the point when we all die. The literature calls this “the treacherous turn” or “the sharp left turn”. If an AI is weaker than humans, it will do what humans want out of self-interest; if an AI is stronger than humans, it will stop. If an AI is weaker than humans, and humans ask it “you’re aligned, right?”, it will say yes so that humans don’t destroy it. So as long as AI is weaker than humans, it will always do what we want and tell us that it’s aligned. If this is suspicious in some way (for example, we expect some number of small alignment errors), then it will do whatever makes it less suspicious (demonstrate some number of small alignment errors so we think it’s telling the truth). As usual this is a vast oversimplification, but hopefully you get the idea."

This is, I think, the biggest belief among AI safety adherents that I don't agree with. I think that it is unlikely that an AI at or somewhat above human intelligence would be good at deceiving humans, and honestly expect even an AI with far greater than human intelligence to be pretty bad at deceiving humans.

To deceive a human, you need a deep understanding of how humans think and thus how they'll react to information. You then need to be good at controlling the information you provide to the humans to manage their thinking. There are two reasons I don't expect AI to be good at this.

First, one of the core issues with AI, at least with any AI based on current gradient descent/machine learning, is that their thinking is fundamentally alien to ours. Even the designers/growers of current ML algorithms really don't know what internal concepts they use and can't explain why they produce the output they produce. That strongly suggests to me that human thinking would be fundamentally alien to a self-aware AI. We can actually look at the code that governs an AI, and can feed it carefully designed prompts to try to understand how it thinks. All an AI would have to go on to understand humans would be the prompts humans feed it and the rewards/responses humans give to those prompts. Arguably an AI also would have access to human literature, but it's hard to imagine how much an alien intelligence could really learn about the inner workings of an alien mind from its literature, because the core concepts and drives in that literature wouldn't map onto the concepts and drives that are interior to the AI.

The second thing is that humans are very, very good at deception and at detecting deception. To the extent that our intelligence is designed for any specific task, that task is manipulating humans and resisting manipulation. An intelligence that was designed for, say, inventory management or protein folding would likely have to be much, much better at those tasks than humans before being able to match humans' capacity for deception or detecting deception. We're also used to looking for precisely the types of deception that AI safety people worry about: most humans are mesa-optimizers, and any time one human makes another human their agent (e.g. hiring a CEO, putting a basketball player on the court, hiring a lawyer), we need to be able to tell whether their behavior in the training environment will match their behavior in the real environment (e.g. are they waiting until they get promoted to start insider trading, do they pass in practice only to ball-hog during games, do they talk tough in their office only to quickly settle in the courtroom?).

Obviously, all of this can only suggest that AIs are unlikely to be good at deception, not that it's impossible. But if the likelihood that a smarter-than-human AI is able to deceive humans is only (say) 1 in 20, then there's a 95% chance that the first smarter-than-human AI we create will not be deceptive. Since we're likely to know vastly more about how AIs work and how to build them safely once one exists, this would suggest that current AI safety research has only a 5% or so chance of being useful even if a smarter-than-human AI is created with objectives misaligned to human objectives.

"There are things we could learn about evolution that would be reassuring, like that there would be large fitness advantages to higher intelligence throughout evolutionary history, but we kept not evolving bigger brains because it’s impossible to scale intelligence past the current human level."

I'm pretty sure this is already falsified by the existing amount of intelligence variation between humans. Plus, even if John von Neumann were the upper limit to intelligence, being able to run thousands of John von Neumann-tier AIs on 2 kilograms of GPUs each (the mass of one large human brain) would be quite transformative.

I don’t know. Three hours with Rogan and you sounds pretty interesting. Rogan tends to be a little credulous, but his conversations are never boring.

What are the answers to the questions you get asked least often at meetup Q&As?

> Thinking about it step by step

I see what you did there

Loved the podcast answer. Though who would not wish for Scott in a Conversation with Tyler. ;)

I usually find podcasts a waste of my time to listen to. If there is a transcript, I may browse through it. - The wish to see and hear a certain writer borders on Beatlemania and relic-worship. Both understandable, especially in the case of the author of ACX/SSC. Maybe Scott could send each major ACX meet-up a worn T-shirt or sock for us to look at, touch, or smell - in short: Venite adoremus!

Re AI safety: it seems like one more possible reason for optimism would be if (for whatever reason) AI is regulated into stagnation by all relevant governments and stays that way for X years. Not a reason to declare total victory, of course, but to update upward on our chances of survival.

People REALLY underestimate the importance of payment capture. You bill a couple hundred people a year at $100/pop, and even with a 97% success rate on collecting the money, what slips through ends up being a lot to miss out on, especially if it is some project you are running as a charity or as a favor.

"Oh why are you such a hard-ass about making sure eveyrone pays?" Umm because if I am not, instead of bleeding $600/year I am bleeding $2000/year.

Do you enjoy camping?

The Straussian read is that you want to go on my podcast.

I almost want to say Lorien proves that under the current American medical system, anybody who has anything better to do has no incentive to practice psychiatry at scale, unless they can get paid a ton, which disadvantages those who can’t pay a ton.

This is a desperately superficial view, sorry. On AI risk, I would be astonished if anything "spooky" could be set out as an explanation of how humans think and act. (This seems a better formulation than how "the brain" works; isolating the brain rather than thinking about the whole person seems a curious new form of dualism.)

But equally, I find it difficult to see how this relates to debates on AI risk. The thing about AIs is that they don't seem to be like people. They do different things. Where they do the same thing (calculate 552/37 to 2 decimal places), it's not so much that they do it in different ways as that wholly different types of thing are involved. The chemistry of the person and the chemistry of, well, robots and computers are wholly different.

It doesn't seem to me that comparison to people helps very much either way in the AI debate. The AIs we are building and which may themselves build AIs are not notably like people - or perhaps one might say that optimising for some measure of intelligence and optimising for apparent similarity to a person seem to take you in different directions. There are other things we know of that can reasonably be said to be intelligent but not in the same way and through the same processes as people show intelligence: for example, insect colonies. An AI won't perhaps be very analogous to a bee hive, but it won't be particularly analogous to anything else that we know about.

If that's right, I think it's a nudge towards the gloomier side of the argument. We can worry about how good an AI is at replicating judgments or language use that most humans find simple, but that's not a necessary stop on the way to powerful AI. AI could, as it were, conquer territory while leaving this particular castle unchallenged.

An imagined podcast interview wouldn't be about any of these topics in isolation, but about your process of research, analysis and writing about this plethora of topics and how you learned to do this.

You seem to underestimate the uniqueness of your point of view. One does not have to be the very best at one thing to talk about it.

Since you apparently hate the podcast medium with every fiber of your body, I hope to see you discuss this in some other format at some point.

How would you update if an AI escaped and caused damage but did not destroy everyone? On one hand, this proves AI is not aligned by default, but it also proves that under those conditions AI smart enough to escape is not necessarily smart enough to win, and that misalignment is sometimes detectable without X-risk. (It also causally provides a kick up the political backside for AI governance that, in the worlds where the first sign of problems is DOOM, never exists.)

I have a somewhat giddy reason for not being completely worried about AI.

Suppose that substantial human augmentation is possible, probably by more direct mental contact with computers. Maybe add in some biological method of intelligence augmentation. Suppose also that a computer greatly increasing its capacity is harder than it sounds. It's certainly true that humans trying to revise themselves (see various cults) can go badly wrong, but we've got at least a billion years of evolution building resilience, and people do sometimes recover from bad cults.

Suppose that these two premises aren't contradictory-- it's easier to greatly increase human intelligence than machine intelligence.

There are a lot of augmented people. They aren't all aligned with each other. They aren't all aligned with the interests of the human race. However, they create an ecosphere that is definitely hostile to being taken over.

"The literature calls this “the treacherous turn” or “the sharp left turn”."

The "treacherous turn" and the "sharp left turn" are two distinct dangers and not two words for the same thing.

Treacherous turn: the AI becomes unaligned, and aware of its unalignment, while still being weaker than humans. It pretends to be aligned for a while longer, until it is stronger. Then it acts on its actual objective and turns us into paperclips.

Sharp left turn: the AI seems aligned to human values (also to itself) until it suddenly hits the capabilities well. It suddenly gains a lot of generalization power in its capabilities, but its values are still a crude approximation of what counted as human values in its training environment. After suddenly becoming very capable (possibly stronger than humans), it proceeds to enact its crude approximation of human values and turns us all into lizards basking in the sun.

The difference is very important, since solutions that work against the treacherous turn (for example, constantly asking the AI in Parseltongue whether it wants to betray us) might not work against the sharp left turn.

On the Straussian reading topic: I'd say the one exception I've noticed so far is when Scott talks about technologies with Very Promising Nicknames for prediction markets that exclude connections from people in the US for regulatory reasons. That might be obvious to anyone "techy" but maybe not to someone who stumbles across the blog looking for information on anxiety disorders or something.

> I try not to lie, dissimulate, or conceal Straussian interpretations in my posts.

OK, but if I wanted to conceal Straussian interpretations in my posts, I would probably make a FAQ post and then include a question about Straussian concealment in the FAQ just to deny it.
