OC ACXLW: Instrumental Lying in AI; Geography Made the US OP
We are excited to announce the 49th Orange County ACX/LW meetup, happening this Saturday and most Saturdays thereafter.
Host: Michael Michalchik
Email: firstname.lastname@example.org (For questions or requests)
Location: 1970 Port Laurent Place
Date: Saturday, Nov 18, 2023
Time: 2 PM
Conversation Starters:
Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure
How Geography Made The US Ridiculously OP
ChatGPT summary plus some additional notes:
Question: Do we underrate the importance of geography in such a way that we overrate the efficacy of the “American system” or “American people” and incorrectly think it is the best system in the world for productivity?
Walk & Talk: We usually have an hour-long walk and talk after the meeting starts. Two mini-malls with hot takeout food are readily accessible nearby. Search for Gelson's or Pavilions in the zip code 92660.
Share a Surprise: Tell the group about something unexpected that changed your perspective on the universe.
Future Direction Ideas: Contribute ideas for the group's future direction, including topics, meeting types, activities, etc.
Other than for humor, is it ever better to utilize the word "utilize" rather than "use"? I can see no use for "utilize", and always take it as a sign the writer wants to seem more impressive than they are, or sound important, or something.
Was really impressed by Curtis Yarvin’s recent appearance on Razib Khan’s audio substack. The subject was, of all things, poetry. Yarvin says at the beginning, “Everyone is interested in poetry, they just don’t know it,” and then claims mid-20th century American poetry is one of the high points of modern civilization, mentioning Robert Lowell as an example. He then read a poem by a 20th century Greek poet whose name I didn’t catch.
Yarvin comes off a lot saner when you hear him talk, hear the humor in his voice, than he does on the page.
When Razib asked Yarvin his opinion of AI extinction risk, he responded within the context of poetry. I’m going to paraphrase now and apologize for what I don’t get entirely accurate, but he says something like: Eliezer Yudkowsky, because he’s a Rationalist, believes he is using his left brain when he thinks about x-risk when he is actually using his right brain. He is creating narratives.
It seems so obvious when it is put like that.
He then talks about LLMs and credits someone with calling them correctly “intuition machines”. Sticking with the right-brain left-brain theme, LLMs are right-brained. (I’m still paraphrasing.) LLMs are very creative but they suck at logic.
It reminded me that what has spooked me most about AI art is how surrealist it is. How well it captures a dreamscape. It is much more Dali than Da Vinci.
He then offers another reason (other than AI is no good at logic) why doomer nanobot scenarios make no sense: AI isn’t good at number crunching. Engineering breakthroughs such as the creation of nanobots will require advances in number crunching. He mentions how we still can’t simulate water boiling -- something which we know all the physics about -- because it’s too computationally intensive.
So: AIs are right-brain thinkers (and so is Yudkowsky, without knowing it) that are bad at logic and math. The apocalypse is not nigh. Robert Lowell is a great poet.
Who here has had experience with IFS (Internal Family Systems) therapy? At first I found it an intriguing idea I could add into my SF work in progress, now I'm finding it applicable to my life and relentless attempt to grasp at sanity, sometimes successful.
Briefly the idea is that there is a core self, and various 'parts' which have arisen to protect it. Some are exiles of feelings too painful to feel, some are firefighters to keep the exiles from lighting up, some are managers to proactively protect. It all seems to work for me. Opinions?
I found this on 4chan. (OK lie: I posted it on 4chan. And it is literally the truth.)
I'm curious about other people's reaction to this true story. As in, I appreciate that it's quite hard to believe, so how much evidence and of what kind would you need in order to fully believe that I am telling the truth?
I want to talk about panspermia, which is one of the dumber ideas that I think people take too seriously. Inspired by this article: https://www.space.com/comets-bouncing-seed-life-on-exoplanets
As far as I can see, the relevant quantity we want to estimate is: given that life originated on a particular planet at some point during the age of the universe, how many other planets (in other star systems) should we expect this life to have spread to by the present age of the universe? If this number is 0.001 then by finding ourselves on a planet with life we can assume it almost certainly arose here originally, if this number is 20 then it's most likely that life originated elsewhere.
If life did indeed originate on Earth then how many other (extrasolar) planets should we expect it to have spread to? I think the number is very much less than 1. Collisions that knock material from Earth into space are very rare, collisions that will knock Earth material clean out of our solar system are even rarer. That a given chunk of such material would eventually reach another star (within the few billion years available) and crash into a rocky planet/moon is very unlikely; that this rocky planet/moon has conditions conducive to life is also very unlikely. And then, the chance that some form of life was on that rock and somehow managed to survive the entire trip adds another layer of unlikelihood. I'm sure it's possible to estimate some of these terms numerically, but I reckon that if we multiply out all these unlikelihoods then we get something pretty small.
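The multiplication described above can be sketched numerically. Every probability below is a made-up placeholder (none of these values come from the original comment or from any real estimate); the point is only to show how quickly the product of several independent unlikelihoods collapses toward zero:

```python
# Back-of-envelope panspermia estimate. All inputs are illustrative guesses,
# not real astrophysical estimates.
ejections_per_gyr = 1e6          # rocks knocked off Earth per billion years (guess)
gyr_available = 4.0              # billions of years available (guess)
p_escape_solar_system = 1e-3     # fraction ejected clean out of the solar system (guess)
p_reach_star = 1e-8              # chance a given rock reaches another star system (guess)
p_hit_rocky_body = 1e-4          # chance it crashes into a rocky planet/moon (guess)
p_habitable = 1e-2               # chance that body is conducive to life (guess)
p_life_survives = 1e-6           # chance viable organisms survive the whole trip (guess)

expected_seeded_planets = (ejections_per_gyr * gyr_available
                           * p_escape_solar_system * p_reach_star
                           * p_hit_rocky_body * p_habitable * p_life_survives)
print(expected_seeded_planets)  # vanishingly small with these inputs
```

With these placeholder inputs the expected number of seeded planets is around 4e-17, i.e. "very much less than 1"; of course, more optimistic inputs for any factor could change the conclusion.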
Admittedly, Earth may be a particularly bad seeding point; we're a large planet with a fat gravity well which rarely gets hit with sizeable objects.
Recently, I idly wondered how much it would cost for the US to host the Summer Olympics again if done as cheaply as possible. I figured the best bet would be to hold it in Atlanta or maybe LA because they'd already hosted the Olympics and could presumably reuse some of the existing infrastructure. I figured it was all just a silly hypothetical though, since I couldn't imagine it ever actually happening this way.
And thus I was very surprised when I tried to research it today and immediately discovered that *LA was already chosen to host the 2028 Olympics*. I almost feel like my daydreaming rewrote reality.
Does wanting an empire make sense today?
There was a time when conquering territory created a buffer zone between you and your enemies, gave you more resources in the form of fertile land, taxpayers, slaves, soldiers, and/or militarily strategic geography. Today land just doesn’t matter as much, as Singapore proves. Natural resources still matter, but not nearly as much as once upon a time. They are a blessing and a curse nowadays.
It’s easy to understand why Catherine the Great wanted an empire. I find it hard to understand why Putin does. Or why China might.
Now, I get why the US wants a military empire. A hegemon that keeps the peace, controls the high seas and keeps global trade going is worthwhile for everyone.
But does it make sense for Russia or China to even *want* an empire?
Many people have said that we shoulda seen Putin’s invasion of Ukraine coming because that’s what Russia does. Russia wants to Russia, to expand its empire. OK. Maybe. Recent events validate that view. But while it’s easy to see what Russia had to gain from expansion in the 18th and 19th centuries, it’s hard to see what Russia has to gain from expansion in the 21st. Am I missing something? *Does* Russia have something to gain by expanding today? Or does Putin prove that ideological and cultural inertia matters more than reason?
I’m actually more interested in China than Russia, since China is the country the US is more likely to fight in a war in coming years. Should we consider China an expansionary power because Chinese history says so? Not recent Chinese history of course, but, um, ancient Chinese history. Does China have much to literally gain by expanding into a global empire, or is the idea of expansion for China a case of mental inertia like it is for Putin?
TIL: Up until 1911, Congress passed an apportionment law after every census which *manually* set the size of the House of Representatives and its allocation, typically increasing it every decade in response to the rapidly growing population. However, following the 1920 census, Congress was unable to agree on a new apportionment law and had to continue using the 1911 apportionment. They only managed to break the impasse in **1929**, when they compromised by establishing the current system where the House is fixed at 435 members forever* and reapportionment happens automatically.
* Except it apparently briefly went up to 437 when Alaska and Hawaii joined. I don't understand why they don't just keep it permanently higher when a new state joins so that other states don't have to automatically lose seats.
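The "automatic" reapportionment mentioned above has, since 1941, used the Huntington-Hill "method of equal proportions": every state starts with one seat, and each remaining seat goes to the state with the highest priority value pop/sqrt(n(n+1)), where n is its current seat count. A minimal sketch (the three-state example populations are made up for illustration):

```python
import heapq
from math import sqrt

def apportion(populations, seats):
    """Huntington-Hill apportionment: each state gets 1 seat, then the
    remaining seats go, one at a time, to the state with the highest
    priority value pop / sqrt(n * (n + 1))."""
    alloc = {state: 1 for state in populations}
    # Max-heap via negated priorities; n = 1 for every state initially.
    heap = [(-pop / sqrt(1 * 2), state) for state, pop in populations.items()]
    heapq.heapify(heap)
    for _ in range(seats - len(populations)):
        _, state = heapq.heappop(heap)
        alloc[state] += 1
        n = alloc[state]
        heapq.heappush(heap, (-populations[state] / sqrt(n * (n + 1)), state))
    return alloc

# Tiny illustrative example: 10 seats among 3 states with made-up populations.
demo = apportion({"A": 6_000_000, "B": 3_000_000, "C": 1_000_000}, seats=10)
print(demo)  # roughly proportional: A gets 6, B gets 3, C gets 1
```

Because the total is fixed, any state gaining a seat under this method necessarily means another state loses one, which is exactly the zero-sum dynamic the footnote complains about.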
Does anyone here use compounded semaglutide, and if so, can you report on your experience?
(context: Unsurprisingly, there's a semaglutide shortage. My doctor recently noted compounding pharmacies as a potential alternative source -- not so much as a recommendation, just "this is an option that exists". My initial googling turned up scary-sounding reports that might indicate that it's a bad idea, or might just be a FUD campaign. Before I dig deeper, I figured I would check if someone here already has.)
I posed a philosophical question to some people which later occurred to me is relevant to the "body integrity" issue debated about kidney donation. The question was this: would you give up a pinky finger for $1 million? The finger is lost forever, so you cannot spend money to get it reattached, but prosthetic replacements would be fine.
I think those that value body integrity would decline the deal, and those that do not would accept it. With this EA audience, I would not be surprised if some people donated the money effectively instead of keeping it themselves.
The question continues, changing the amount to $1 billion (if you refused the first deal). This would be much harder to turn down, especially if you have a good idea of what $1 billion can actually do.
Yet I have considered this question in the past, and concluded I would refuse the deal, for I would be reminded every day, by the missing finger, that I effectively sold my soul to gain my current position. It should come as no surprise I would not choose to anonymously donate a kidney (though I applaud those that do).
The Israeli administration's latest hilarity: Hamas is literally Hitler.
From https://www.timesofisrael.com/herzog-arabic-copy-of-mein-kampf-found-on-hamas-terrorist-shows-what-war-is-about/ :
> President [of Israel] Isaac Herzog on Sunday displayed an Arabic-language version of Adolf Hitler’s autobiographical manifesto “Mein Kampf” that he said was found in a children’s room used as a base by terrorists in the northern Gaza Strip.
> “The terrorist wrote notes, marked the sections, and studied again and again the ideology of Adolf Hitler to hate the Jews, to kill the Jews, to burn and slaughter Jews wherever they are,” he said. “This is the real war we are facing.”
In what is possibly the most low-effort propaganda act by a head of state in the 21st century so far, the guy didn't even think of planting the book in a room or on a dead body; his audience's intelligence is apparently not even worth a staged video to him. In the bizarro world in his head, it's enough to hold up a translated copy of Mein Kampf to declare any armed militia speaking the same language to be literally Hitler.
Anyone successfully managed to find a way to treat airplane headaches? (Headaches caused by the change in pressure when an airplane starts descending for landing)
Google just suggests various painkillers, which don't really work and also seems like it's treating the symptom more than the cause.
Basically, whenever the plane starts descending, I need to swallow every 10 seconds to equalize the pressure until landing (unlike a normal person who only needs to do this every 10 minutes or so), or I will get an "arnold schwarzenegger on the surface of mars" level headache. I currently treat this by drinking water and chomping on chips nonstop during the descent, which... I guess is not the worst thing in the world, but there's still a lot of distress involved, so I'm very open to ideas.
"The modern Japanese writing system uses a combination of logographic kanji, which are adopted Chinese characters, and syllabic kana. Kana itself consists of a pair of syllabaries: hiragana, used primarily for native or naturalised Japanese words and grammatical elements; and katakana, used primarily for foreign words and names, loanwords, onomatopoeia, scientific names, and sometimes for emphasis. Almost all written Japanese sentences contain a mixture of kanji and kana. Because of this mixture of scripts, in addition to a large inventory of kanji characters, the Japanese writing system is considered to be one of the most complicated currently in use."
And people complain about spelling in English!
Maybe people complain about Japanese orthography, too, but I don't see it because they're doing it in Japanese. Maybe there are people who say fuck it, and experiment with writing everything in kana.
Maybe the issue is that English is relatively close to being phonetic (80%, I'm told) so it seems like an easy solution should be possible. Easy solutions ignore how dependent people are on word shape when they read.
How much difference does having a phonetic language make for a culture-- it seems like a lot of valuable learning time for children gets sucked up when there's a lot to learn about spelling, but do cultures with phonetic spelling (Hebrew, Spanish, probably more I don't know) show an advantage?
Is there a good way to evaluate the complexity of a language, including spelling, complicated grammar, arbitrary gender for nouns, etc.? I know there's research on how difficult various languages are for Anglophones to learn (the military has a rating system) but is there anything for overall processing effort for native speakers of various languages?
Anthropic is currently hiring for a non-AI research role that I am (plausibly) qualified for. I know that they have a much stronger safety emphasis than most AI companies, but I feel a bit wary of working for anyone doing capabilities research. If we assume for the sake of argument that I would be better than the hypothetical replacement candidate, is anyone willing to make an argument for why I should(n't) apply?
Does anyone else think that this photo looks like n zbhagnva cbxvat guebhtu n ynlre bs pybhqf at first glance? It's an interesting illusion.
TIL: The streets of ancient Kyoto were significantly wider than they are today. Apparently, Kyoto was originally a planned city on a grid with massive streets. It's a striking counter to the usual trope of old cities having narrow streets because they weren't designed for cars.
Is there an existing concept for describing how people like to think about the future in terms of sci-fi (or fiction in general) rules, even though there's no reason why the future has to follow such rules? Specific examples:
- Before COVID, people had a sort-of cataclysmic image of what a pandemic would look like - i.e. see the movie Contagion from 2011. Lots of books talked about a horrible virus that would break down the global economy and threaten civilization as a whole. Bill Gates warned us about it and George W. Bush famously set up a task force to fight a future flu pandemic. But reality was... much more boring? Yeah, there was some disruption with COVID but things quickly bounced back to normal. There was some civil unrest in the US but it was mostly for pandemic-unrelated reasons. There was a race to make a vaccine but it ended up taking close to a year and a bunch of people already had the virus by then. In other words, reality was far more boring than we'd imagined it would be.
- When people imagine what life with an AGI/ASI would look like, they always talk about either complete annihilation of humanity or something like the Matrix movies or perhaps an anti-utopia where human labor no longer has any value and we all end up miserable due to being useless. Or in the case of Star Trek the AGI is smart but not _too_ smart so human decisionmaking still has some value. However no one seems to imagine boring outcomes, such as the machines inventing a "super drug" that would send everyone into a state of pure bliss for millions of years in a row or perhaps rewiring everyone's brain to remove the need to be "useful" in order to experience happiness. It's a very boring future to predict so... people don't seem to be predicting it?
- When discussing brain uploads (i.e. Robin Hanson's EMs) people seem to assume that "virtual humans" would still be sort-of like real world humans, except they'd run inside a machine. This leads to some anti-utopian predictions of virtual humans being exploited/abused and living a miserable life. But why wouldn't virtual humans simply be rewired to remove the ability to experience any suffering or sadness, thus resolving all the moral dilemmas around their existence? It's a boring answer but isn't it also the most plausible outcome?
My examples are sort-of vague but I hope I was able to write down the general idea of how the need to be interesting when writing fiction interferes with our ability to make boring predictions about the future.
I want someone to open an AI art gallery near me, where you can buy nice AI-generated Giclee-printed art for your walls at a fraction of what it would cost with a human artist.
I know I could buy AI-generated prints on the internet, or AI-generate my own and send them somewhere to be Giclee-printed, but I'd rather buy it from a shop so I can stare at it in full size for a while before deciding I'm interested in hanging on my own wall.
Does this sort of thing exist anywhere in the world yet?
I believe this post discusses what you are thinking of:
The book itself is arguably outdated (and may not have been that accurate to begin with), but I think Scott's thoughts on it might be useful to you.
I saw a meme mocking weeaboos with the dialogue "I wish Japan had won WWII." This is risible for many reasons, but the most salient might be that all Japanese pop culture is downstream from their *loss* in that war. That alternate universe must have very different media from ours, but *how* is it different? Is it all State Shinto propaganda all the time?
The Wise Master says "Get weird about that thing you're weird about... Make your nest in that closed loop of happiness and be unbound from the unceasing whirl of judgement."
The implication is that this experience is universal, but what if it isn't? What if there are people who aren't weird about *anything*? I'll call such a person a "King of the Normies": their lives are optimized around status-seeking to the point that it would never occur to them to do anything for any reason other than the approval of others. This is somewhat similar to the "NPC meme," but that meme is wrong in an important respect: while the NPC is depicted as being "below" the reader in some important way, a KotN is likely at the very top of whatever status hierarchy is relevant to them.
It then follows that people with immense power over me likely have a thought process that is completely alien to mine: surely a dangerous situation for me. Since a KotN would never bother to understand me (where's the status in that?), I'd better create some mental model of them, but how? I might read media optimized for a KotN, but they probably derive status from consuming media I've never heard of, so this has its limits. I've found what I think is a workable approximation in media targeted at people who live in New York City, but maybe there's a more efficient filter?
TL;DR: I am a bug trying to avoid being stepped on, and I'd like a way to figure out where the giant is going to put his foot next.
I have written stuff on my blog before that carries the subtext that I myself am enlightened.
Well, I am OUT of that closet, I don't CARE what Daniel Ingram says, I AM a FUCKING ENLIGHTENED SUPER-BEING (technically, so are you), and in this FUNNY and HERETICAL essay, in which you will LAUGH and which will cause you to SUBSCRIBE to my BLOG, I share the story of how I got ENLIGHTENED, before taking Buddha, Jesus and Muhammad down a peg or three!
My Beautiful Dark Twisted Enlightenment
I think Scott has discussed how the number of lawyers in America grew rapidly beginning around 1970. Federal bills in the US have also gotten longer, on average, over time. In general, there are more pages written by legal professionals impacting the functioning of American business and society today than before 1970.
It's easy for me to think of the downsides of increasing red tape, etc., but there are likely significant benefits as well. What are some of the benefits of the increasing "legalization" of society? Are there counterfactuals of developed nations that haven't had an increase in legalization?
The two positions I know of on parking are "inertia, keep the current system" and "ban cars, ban parking, ban everything, die die die".
It's always seemed to me like the obvious solution is to incentivize (through zoning? tax breaks? subsidies?) giant multi-storey parking structures anywhere dense enough to support them, allowing lots of parking with a comparatively small land shadow. Is there some reason nobody talks about this? Does anyone else think this is the obvious solution?
Why is Effective Altruism (EA) controversial?
My understanding of EA is that it wants to make sure money donated to a charity actually reaches the intended recipients the donor is trying to help.
This seems pretty non-controversial to me. What am I missing?
I'm learning to play an electronic musical instrument called [Linnstrument](https://www.rogerlinndesign.com/linnstrument). It looks like [this](https://cdm.link/files/2014/04/linnstrument1-480x640.jpg). It's a MIDI controller with much worse velocity control than on a piano but it has advantages too: I can coarsely control the timbre of each note by slightly moving my finger, I can do glissandos like you can do using vocal cords, and it's very easy to transpose because there are no white and black keys and instead the layout is like frets on a string instrument, with each next string tuned a fourth above the previous one.
I want to learn to self-accompany my singing on it - like a person with a guitar or a piano might do - sing and play at the same time. I mostly want to sing pop, blues, pop rock songs. There exist a few thousand copies of this instrument in the world so there aren't any in-depth study resources for it. Please recommend me how and what I should study to become able to self-accompany. My prior musical background is that I have basic music literacy - notes, greek modes, chords, intervals, degrees, etc. My ear is not very trained. I've been playing a diatonic harmonica for one and a half years.
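The fourths layout described above has a simple arithmetic structure: each column step is a semitone and each row up is a perfect fourth (5 semitones), so every fingering shape is transposable by just shifting it on the grid. A small sketch (the base note here is an assumption for illustration, not necessarily the instrument's factory default):

```python
# Map a pad at (row, col) on a fourths-tuned grid to a MIDI note.
# Base note is an illustrative assumption, not the real default setting.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pad_to_midi(row, col, base_midi=30):
    """MIDI note for the pad at (row, col); row 0, col 0 is bottom-left.
    One column right = +1 semitone; one row up = +5 semitones (a fourth)."""
    return base_midi + 5 * row + col

def name(midi):
    """Human-readable note name, e.g. 60 -> 'C4'."""
    return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

# Moving a chord shape one pad right transposes it up a semitone;
# one row up transposes it up a fourth - the shape itself never changes.
print(name(pad_to_midi(0, 0)), name(pad_to_midi(1, 0)))
```

This is why transposition is trivial on the instrument: unlike a piano, where each key's chord shapes differ, here a song in a new key is the same hand shapes starting from a different pad.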
Does anyone have experience attempting to reduce Lp(a)? Mine is at 42 ng/dl. I recently started taking statins and LDL is down significantly, but I don’t like this Lp(a) situation.
Wondering if anyone else here reads "Kill Six Billion Demons" (https://killsixbilliondemons.com/).
I've been loving the latest storyline with Solomon David and White Chain. I find Solomon David a really interesting character and his initial interactions with White Chain ended on...kinda a cheap note. I'm really liking how many similarities the author is drawing between them and I'm curious where it goes.
For those of you who don't read Kill Six Billion Demons, um...
In a universe of super-powered Hindu martial artists, a Super Saiyan god king forms the ultimate patriarchy until a transgender, transracial angel punches him and guilt trips him into forming a democracy. Then stuff happens and the transgender angel gets put in charge. Three years later, the transgender angel tracks down the super saiyan god king, who's now just chilling and fishing, and tries to put him in charge again because he/she/they/it hates being in charge.
It's good, I swear!
Question for trans rights people: Are there any specific legal/policy trans issues that are meaningfully important to change in themselves (and not for their cultural symbolic values)?
e.g. for gay marriage the debate itself was mostly symbolic but there really are some marriage benefits (like default inheritance laws or whatever) that are meaningfully important in themselves aside from the social acceptance signal.
Antidepressant or Tolkien character?
https://antidepressantsortolkien.vercel.app (not my work, but I thought of this community)
The 5th Circuit just cited Scott in its recent ruling against the ATF, on page 48, footnote links to "All in All, Another Brick in the Motte."
Is this the first Federal appellate decision citing SSC?
In Matthieu's response to Seth's comment about "A Void", he proposes "What your Wiki should do, is build a spot for parody (or playful imitation) which is cut off from Wiki's 'main' part."
Which is pretty much my understanding of how TvTropes came into being. The page itself started out as a BtVS fan community project in 2004, but happened to coincide with an internal conflict among Wikipedia editors over how tightly "encyclopedic" Wikipedia should be with the minimalists coming out on top and systematically eliding "fancruft" from media pages. Said fancruft, along with the editors who liked it, found a hospitable home at TvTropes.
TvTropes has since implemented page tabs specifically for minimalist and maximalist interpretations of its own content: the default pages are quite a bit broader in scope and freer in format than corresponding Wikipedia pages (beyond the obvious catalogs of tropes on media pages and examples on trope pages), but there's also "Laconic" tabs for brief just-the-facts descriptions of the tropes and "Just For Fun" tabs for goofiness that would get in the way of clarity on the main page. In addition, TvTropes has grown a "Useful Notes" section of pages for real-life background useful for understanding media, many of which pages are actually really good encyclopedic articles and are sometimes arguably better than the corresponding pages on Wikipedia.
So the standard explanation as to why Netflix took over from cable is that streaming services can show you any show, any time, and are not limited by just showing 1 show at a time in a particular time slot. Netflix got its start because cable networks (I think Starz was the first one) leased them their old back catalogs that weren't being aired anyways. It snowballed from there.
But technologically- is this true? Can you really not serve up shows on-demand via a cable line? Imagine that Comcast (I know, I know) had a Netflix-like UI where the customer selects a show from the Comcast servers. Why can't Comcast serve it up over the cable line the same way Netflix serves it up over the phone line? What are the technological hurdles here? Maybe the UI signal travels to Comcast over the phone line, then the show comes back to the customer over cable.
I'd be much more interested in discussing the technological obstacles to this than anything else. I mean we all know Comcast is not innovative, but let's assume for a moment a competent smart Comcast. Engineering-wise, do cable lines just not work this way? Why?
I just came across a fascinating essay by John Tooby, one of the founders of evolutionary psychology, on tribalism, what he describes as "evolved neural programs specialized for navigating the world of coalitions—teams, not groups. ... These programs enable us and induce us to form, maintain, join, support, recognize, defend, defect from, factionalize, exploit, resist, subordinate, distrust, dislike, oppose, and attack coalitions. Coalitions are sets of individuals interpreted by their members and/or by others as sharing a common abstract identity (including propensities to act as a unit, to defend joint interests, and to have shared mental states and other properties of a single human agent, such as status and prerogatives)."
He argues that there was strong evolutionary pressure to develop such programs, something difficult enough that most species have not been able to do it, and points out one downside:
"Forming coalitions around scientific or factual questions is disastrous, because it pits our urge for scientific truth-seeking against the nearly insuperable human appetite to be a good coalition member. Once scientific propositions are moralized, the scientific process is wounded, often fatally."
I remember someone, inspired by Scott's "Every Bay Area House Party" wrote something like "Every NYC House Party", I intended to read it but was never able to find it. If I didn't hallucinate this, and someone knows what I'm talking about, can I please get the link?
Is anyone here buying individual bonds, as opposed to a bond fund? Bonus points if the bonds that you're buying aren't Treasuries. I'd be interested to hear what the reasons for buying individual bonds are- it's complex, less liquid, and has higher transaction costs, right? And while bond funds charge (low) fees, they're also supposedly getting a better price than a retail investor on each bond- right?
Having recently been rereading the “Sadly, Porn” book review (https://www.astralcodexten.com/p/book-review-sadly-porn) and having encountered the following passage:
> But - don’t laugh - a lot of the time when I listen to music, I find myself fantasizing about being the person who wrote the music, or playing the music in front of a big audience while everyone applauds me, or something like that. It seems that my enjoyment of music - maybe not quite as primal as sex, but still pretty primal - actually *is* at least assisted by status fantasies.
I wonder what Scott feels or would feel about rhythm games in that light? As with many game/reality distinctions, the framing often injects a sort of protagonism into things: Guitar Hero might be the most overt example with it right there in the title, but idol games also involve “star performer wowing the audience” fantasies. On a more skew note with dance rather than music proper, there's Elite Beat Agents, where the group you're playing dances well enough to magically inspire people out of their life problems. Notably, the backing tracks are generally prerecorded, so while there may be a high skill ceiling in the gameplay proper, it's partially disconnected from the music itself (though some games do have some dynamic sample/track interactivity).
Looking for recommendations: if I wanted to make a turkey or ham sandwich without meat, what could I use instead? Requirements are that it requires no cooking, is high in protein, and tastes decent. I have approximately no experience with meat substitutes, so even information that is obvious to people in the know is welcome.
Does anyone know of good models for how religious conversion works? It makes some sense to me that one would personify and make rituals about ex. rain or sun, and that would develop a mythos over cultural time. I can also understand that for some people materialist/atheist explanations of life are deeply troubling, and they turn towards a spiritual tradition. But if you already believe in Odin, and some weird foreigners start telling you about this guy who was the son of God (there's just the one apparently?) and died for you personally somehow... How does that become convincing? I know that a lot of the wide-scale conversion involves politics and conquering and such, but I'd think enough people had to already be Christian for it to matter politically. It can't be all conquering. I listen to/read a lot of religious media and usually seems like it needs belief to be present already to be convincing. I really do not care that God made a huge sacrifice by killing his son so I would have a chance to be saved because I don't think any of that happened. Conversion stories I've read usually have this gap:
- feels Something Missing (in current religion or lack of one)
- hears about The Religion
- learns some semi-interesting fables/has a personal sense of Something
- ...and that's why The Religion is just so convincing and powerful and true.
What? Huh? Any books, articles, or personal experiences that could clarify this mental maneuver would be appreciated.
Bad news for nuclear, again https://www.science.org/content/article/deal-build-pint-size-nuclear-reactors-canceled
Seems like the same old problem of too high a cost for too little benefit. Specifically, the cost of concrete, apparently; and at this point it's hard to beat solar + batteries on cost-effectiveness.
There's a new random picture up for the ACX google info page. Who will Scott be next time? Is there a manifold market set up?
I could use some thoughts and ideas.
I'm working on something and I'm stuck between two poles. The first is the pure mathematics and logic of the system I'd like to build. The second is the practicalities of firming up a "good enough for now" version and implementing this in software.
The project is a Bret Victor Future of Coding inspired thing, and in short it is this: I want the user to be able to build up a "model" of the domain they're trying to write software for. This isn't a "model" as in the typical Rich/Anaemic Domain Model built out of classes. This is more like a language-agnostic attempt to identify the conceptual units and moving parts within a field, in a cleave-reality-along-its-joints sense. It would be displayed as blocks and lines and diagrams and whatnot.
My intuition says that if the model maps correctly to reality, and every part in the model can move in the same way as its real-life equivalent, then any coding task that can be described in reality will always be talking in terms that the model already understands. I.e., future features are always certain to be straightforward to add, and the underlying model never has to be altered.
The idea is the user builds up this model out of tools I give him, and the process becomes almost philosophy (what is a "Task"? Is it inherent to a Task that there is always a "start time" and an "end time", or are those concepts in fact external to the Task itself?) Once the model is built, it can be (possibly semi-automatically) turned into the classes and db schema of a more traditional Rich Domain Model.
My question now is, what tools should I be giving the user to build these models? So far I'm thinking along these lines:
- you need "conceptual units", the nouns and verbs that have meaning in your system, and whose implementation might look like data objects for nouns and functions for verbs.
- you need relations, telling you how two entities relate to each other, in the same way that when people try to explain things on whiteboards, they instinctively start connecting blocks with coloured lines.
- you need some way to grant meaning ("adjectives"?) to your concepts, such that you can ask human-readable questions about them. (Ie, if you're running a shop and you frequently use the phrase "out of date" for stock items, then your model and system should also know what "out of date" means. That way, code that implements actions can be read in the same terms as humans thinking about them.)
- you need behaviour/reactivity/autonomous actions, I think. At the very least, conceptual units need to "defend their borders" and act/allow themselves to be edited in only the ways that their real-world counterparts can. But I've also got caught up in the idea of reactivity, especially for systems with processes and knock-on effects. If you perturb the system in one place, it should know how to respond and ripple out in the same way as would happen if you prodded the real world system.
- I need a sane UX for all of this.
The whole idea is to save the user from having to think about the technical implementation details of setting up this system, and let him just focus on the formal rules. That means I've got to be the one building and implementing everything he could ever need, here and now. Ideally in such a way that using the tools always feels natural and he doesn't immediately start needing ugly workarounds.
I feel like I don't have a solid framework to work with here. Is there any concept analogous to Turing-completeness, where I can know that I have the right set of tools and therefore any required model will be easy to build?
Just in general, does anyone have thoughts?
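To make the toolkit idea concrete, here is a minimal Python sketch of how the primitives above (conceptual units, relations, adjectives, and border-defending edits) might fit together. Every name here (`ConceptualUnit`, `Relation`, `adjective`, etc.) is a hypothetical illustration for discussion, not an existing framework:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ConceptualUnit:
    """A 'noun' in the domain model, e.g. a Task or a StockItem."""
    name: str
    attributes: dict = field(default_factory=dict)
    # "defend their borders": invariants every proposed edit must satisfy
    invariants: list = field(default_factory=list)
    # reactivity: callbacks that let a change ripple outward
    reactions: list = field(default_factory=list)

    def update(self, **changes):
        proposed = {**self.attributes, **changes}
        for check in self.invariants:
            if not check(proposed):
                raise ValueError(f"{self.name}: edit violates an invariant")
        self.attributes = proposed
        for react in self.reactions:
            react(self)

@dataclass
class Relation:
    """A coloured line on the whiteboard: how two units relate."""
    label: str
    source: ConceptualUnit
    target: ConceptualUnit

def adjective(name: str, predicate: Callable[[ConceptualUnit], bool]):
    """Grant human-readable meaning, e.g. 'out of date' for stock items."""
    def ask(unit: ConceptualUnit) -> bool:
        return predicate(unit)
    ask.__name__ = name
    return ask

# Usage: a shop's stock item that "defends" a non-negative quantity,
# plus an "out of date" adjective the model can answer questions about.
item = ConceptualUnit("StockItem", {"quantity": 3, "days_old": 40})
item.invariants.append(lambda attrs: attrs["quantity"] >= 0)
out_of_date = adjective("out_of_date", lambda u: u.attributes["days_old"] > 30)
print(out_of_date(item))  # True
```

Even a toy like this surfaces the design questions in the post: whether invariants belong to the unit or to the relations around it, and whether reactions are enough to model knock-on effects or whether a proper propagation network is needed.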
Lately I've been thinking more about reputation systems online (and how we don't have any good ones). It would be cool if we could see a profile through different lenses, instead of just one single number of followers. For example, I would love to be able to see what just my engineering friends think of someone, instead of the whole internet.
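The "lenses" idea can be sketched as the same ratings data filtered by which group the rater belongs to. All names and numbers below are made up for illustration:

```python
def reputation(ratings, target, lens=None):
    """Average score for `target`, optionally restricted to raters in `lens`."""
    scores = [score for rater, groups, t, score in ratings
              if t == target and (lens is None or lens in groups)]
    return sum(scores) / len(scores) if scores else None

ratings = [
    # (rater, rater's groups, target, score)
    ("alice", {"engineering"}, "dana", 5),
    ("bob",   {"engineering"}, "dana", 4),
    ("carol", {"art"},         "dana", 1),
]

print(reputation(ratings, "dana"))                 # the whole internet's view
print(reputation(ratings, "dana", "engineering"))  # just engineering friends
```

The interesting part is that one underlying dataset yields different answers per lens, so "reputation" stops being a single global number.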
Regarding the general discussion of the call for a pause on AI, here's why I think that is a non-starter. This is an example of what Microsoft is selling to businesses in regards to AI (they keep sending me stuff because I did some purchasing for the workplace), and it's not "let's all just take a moment to stop and look around":
"In conversations with customers and partners, organisation and culture frequently emerge as critical factors for success.
According to Gartner: “The pace of AI technology maturation and diverse approaches make it difficult to capture and sustain value from AI initiatives. Effective AI operating models that leverage current investments in people, processes and technologies enable IT leaders to drive successful AI initiatives.” It can mean the difference between AI projects that are viewed as science experiments and those that become significant value-drivers.
An organisation’s ability to manage change is also a critical driver of AI success. “You very quickly learn that by the time you succeed with something, it’s already outdated,” says Mikkel Bernt Buchvardt, Director of Data and Analytics at SEGES Innovation. He suggests embracing this reality, rather than letting it slow you down. “You can keep gold-plating your methods, or you can make it good enough to deliver some value.”
They do address the topic of AI safety, but according to them, none of their customers have concerns about runaway AI paperclipping us all, or hopes that AI will make everyone from a Papuan head-hunter to Emperor Jeffy himself rich, transhuman, and eternal:
"Given the pace of AI innovation, the most frequent questions we hear from customers are: Where do I go from here? How do I create the most impact? What does success look like?
• Review and share resources on responsible use of AI to identify the models and approaches that best suit your organisation.
• Consider the enablement model that best fits your needs, such as hub-and-spoke, centralised or distributed.
• Consider the principles of secure AI and how to ensure your data is protected end to end from platform to applications and users.
• Consider the processes, controls and accountability mechanisms that may be required to govern the use of AI, and how AI may affect data privacy and security policies."
The below link is from 2021 about their approach to Responsible AI:
But they're already rolling out AI for customers, as they happily share here:
And it's looking like blue-collar jobs are in line to be automated away; no more Wichita linemen! At least, in Germany:
"Walk, climb, inspect, note, repair: this is the classic procedure for power line maintenance. Much of this work is done manually and requires a huge amount of effort; therefore, it was sensible to see whether it can be made more efficient and safer using digital solutions. To minimize risks while fulfilling its responsibility to supply households in Germany with electricity, E.ON sends up drones to take photos of power poles and lines. These photos are uploaded to Microsoft Azure and transferred to Grid Vision®, an AI-supported inspection tool supplied by eSmart Systems, which evaluates the images—thus making the maintenance process more efficient and safer."
So while all the talk about AI risk may be "it'll become agentic", in the real world applications, it's about "money". I think the EA risk movement really needs to take account of this, and not get side-tracked by beautiful theory. Value-drivers, not science experiments, in their wording.
I'm looking for collaborators on a learning/education project.
Project: A generative think tank exploring practical applications of AI technologies in education and learning.
Pitch: LLMs make possible a new paradigm of learning/education. The disruptive potential is huge, but not always intuitive. This think tank will explore how these technologies can be applied to: experiential/project-based education, classical education, Egan's imaginative education, Socratic teaching/learning, Erik Hoel's aristocratic tutoring, home schooling, sports education, early childhood education, lifelong learning, etc.
Practical vision for think tank: 1. Start casually as meeting of the minds, brainstorms, and agreeing on scope(s) of work. 2. Perhaps start a collective substack on the topic. 3. Evolve project(s) from there.
Shortcut: In his sci-fi novel The Diamond Age, Neal Stephenson imagined a version of this paradigm shift, and it provides an excellent reference/starting point.
Please write to protopiacone at gmail if interested in exploring.
Anyone know of a way I can listen to the collection of Scott’s posts called ‘The Codex’ on LW?
I can load them one at a time into a text-to-speech app. Any improvement on that?
Does anyone have a recommendation for a history of the Catholic Church, particularly the early Church? We’ve got our boys in Catholic school and I think it’s time I get a better understanding of the whole thing. Plus what little I know is fascinating.
Along the same lines, but less important: how about a good book on the Church’s mythology - ranks of angels, descriptions of saints, that sort of thing. Our three kids are nearing the age of D&D and I had a half-formed idea of creating a campaign around the Dark Age church. :) Thanks!
I am looking for arguments against 'moral reasoning' as a way to determine 'moral truths'. If anyone has seen persuasive writing on this, please steer me.
The EA community is based in the Bay Area, as far as I know. Has there been any attempt at local charity solutions, like fixing or ameliorating homelessness in San Francisco?
Inspired by Scott’s language book idea:
-Whenever I’m learning a language I always scaffold my understanding of the language as much as possible by listening to Pimsleur, as a great many people do.
-Pimsleur language programs are STILL seen by many amateur linguists like myself as the best way to learn the most mechanics of a language in the shortest time, despite the fact that they were invented in the late 60s. They were invented, from what I know off the top of my head, as a more user friendly version of the FSI programs the US uses to train its diplomats in languages. (Correct me if this isn’t true.)
-However, despite being, as mentioned, some of the best (IMO the best) introductions to languages, they were, as also mentioned, invented in the late 60s, so surely they could be updated now into an even more efficacious program.
-Obvious way in which they could be updated #1: popular Pimsleur programs (such as French, German, Japanese, Korean) have 5 “levels” (units comprising 30 × 30-minute lessons each); unpopular programs (such as Thai or Farsi) have 1 “level”. Doing up to level 1 will render you a smart-seeming beginner at a language; doing up to level 5 will render you something like low- or mid-intermediate.
But what if the Pimsleur programs had like 10, 15, 20 “levels” instead of 5 tops? Wouldn’t completing that much Pimsleur make you, like, fluent or near-fluent? Presumably Pimsleur just doesn’t have the demand for this to make it a good business move, but theoretically if someone else made a sort of Pimsleur clone in this vein…. rite?
Obvious way the programs could be updated #2: the Pimsleur programs also look dated now inasmuch as they are just the listener forming sentences in a very scripted way to have a simulated “conversation”. (I do not explain this well, but if you have done Pimsleur you know what I mean.) But the simulated “conversation” could become a much more fluid and interesting conversation (spoiler: Pimsleur, though great, is boring) if it were a semi-unpredictable conversation with an AI rather than a scripted one.
Minor issue: The link to "A void" was probably supposed to go to the wikipedia article. As of now, both links point to the comment.
I write stories and novels, and publish them for free. I was heavily inspired by Worm and Practical, and I try to take that sharp grasp of character and system to smaller, more personal places.
My first novel, Last Day Town, takes place in a crater on Israeli-controlled Ceres, where citizens are thrown into the vacuum as a form of execution. The condemned are thrown with functioning suits and twenty-four hours of oxygen, one every two hours. The story follows Yossi, a journalist who is looking for his informant, and the way he deals with that micro-society.
Idolatry is an anthology of short stories about faith - Lovecraft's Nyarlathotep reimagined with a 4chan-dwelling Incel as the protagonist; A murder mystery taking place in the biblical time of Judges; A dragon's scientific quest to prove she's not the only sentient being in the world.
They are all posted here:
and updated either weekly or as soon as my beta readers get to them, whichever comes first.
I'll preface this by saying that I am not particularly convinced by AI extinction risk, and I am much more scared of bad actors (including most governments) using AI.
That being said, a genuine question for effective altruists/rationalists. Most of the discussion seems to revolve around LLMs like ChatGPT/Claude/Llama etc. Aren't military applications of AI a major blind spot? I am thinking about companies like Anduril. Searching for "anduril" on LessWrong or the Effective Altruism Forum, for example, returns very few posts, and some of them are even positive. I expect that the development of autonomous weapon systems is a far more dangerous development than whatever open-source LLM Meta releases into the wild, yet the interest of the rationalist/effective altruism movement seems comparatively little.
(OTOH, I expect people who are not scared of AI to answer this comment defending Anduril. I am not interested in a general defence of the company; I do see the value of strengthening military capabilities, even though I think autonomous weapon systems are the wrong direction. I am interested in understanding why people who are in general very opposed to/concerned by AI do not seem to be all that concerned by these developments.)
Let's continue exploring the ancient Scott-writings, this time Book Review: Romance of the Three Kingdoms (https://archive.ph/tIGkX or https://pastebin.com/71ZpA0Ee).
The length of this review may shock you, but it was written before the rationalist community discovered that all reviews must be at least 30% as long as the work being reviewed, so it should be treated as a product of its time.
I've always liked the idea of book clubs, but in practice they're difficult to maintain. I think the commitment required to read a full book is a big barrier to this, so I'm experimenting with the idea of an 'Essay Club' over on my Substack.
The focus will be topics similar to the kinds of things Scott covers, as well as more general literary topics (the first essay will be 'Why I Write' by George Orwell). If this sounds interesting to you, please come check it out: https://mindandmythos.substack.com/p/introducing-essay-club
Also, I'm very open to suggestions for future essay picks!
I want to increase my base metabolism, ideally to help me lose weight, or at least make me more happy with my body. So I guess you could say my goals in priority order are:
1. Lose pounds of fat
2. Gain pounds of muscle
For the recent past, I've been focusing on this by adopting a more "bulking" strategy, wherein I'd use larger weights for my exercises and try to push my muscles to hit higher and higher weight limits. I'd usually do this by doing 2 to 3 sets of 12 to 15 reps for each muscle, trying to push myself to muscle failure. So basically, more weight, fewer reps.
However, for achieving my stated goals, how does the above bulking strategy compare to a "toning" strategy, where I'd essentially be doing less weight for more reps and more time? With this sort of strategy, I may be doing up to 5 minutes of reps at a time, but with 1/2 to 1/3 of the weight I'd be using for bulking.
Which strategy is better to help me achieve my goal? Or should I do a mix, in which case, what percentage of time should be spent on each?
For people who know things about biochemistry and medicine: what are your opinions on https://twitter.com/johnsonmxe/status/1707084107421732911, an argument for taking nattokinase for general health?
I'm not looking for debating priors around supplements or studies (which have already been discussed here ad nauseam and tend to be really unproductive), so much as people who have some knowledge that bears on the object-level question of whether this particular supplement could be good or bad for people.
If for $500 you could somehow acquire the power to reduce the suffering of a million people who lived 1000 years ago without changing the present in any way, would you pay the money?
What if you could pay the same to reduce the suffering of a million people who will live 1000 years from now?
Is there a reason you would do one and not the other?
I suppose it would come down to whether you believe “the past isn’t even past” or, rather, believe the past doesn’t exist because it no longer exists.
But if you believe the latter, then what good would it do to help people living 1000 years from now, given that they also will pass and no longer exist?
Optimal gap between pregnancies after caesarean.
Getting pregnant shortly after a caesarean delivery seems to have one particular risk: an increased chance of uterine rupture due to the scar not having had enough time to heal. This study https://pubmed.ncbi.nlm.nih.gov/20410775/ recorded rupture rates of 1.3%, 1.9%, and 4.8% for interdelivery intervals of >24, 18-23, and <18 months respectively.
The main advantage I see to getting pregnant sooner is general health of parents and the baby (and potential future babies) due to age. I'm sure there are other factors weighing in either direction (feel free to point out any that seem important to you).
However I have no idea how to trade these two off against each other - any thoughts would be highly appreciated!
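To put the quoted numbers side by side, here is a back-of-envelope arithmetic sketch of the absolute and relative differences between the interval bands, using only the rates from the study linked above:

```python
# Uterine rupture rates by interdelivery interval, as quoted from the
# linked study (PMID 20410775).
rates = {">24 months": 0.013, "18-23 months": 0.019, "<18 months": 0.048}

baseline = rates[">24 months"]
for interval, rate in rates.items():
    extra = rate - baseline  # absolute extra risk vs. waiting >24 months
    ratio = rate / baseline  # relative risk vs. the >24-month baseline
    print(f"{interval}: {rate:.1%} rupture, "
          f"+{extra:.1%} absolute, {ratio:.1f}x relative")
```

So the <18-month band carries roughly 3.5 percentage points of extra absolute risk over waiting >24 months, which is the kind of number that then has to be weighed against the age-related factors pulling the other way.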