Ben Shapiro dedicated all of today's podcast episode to calling out Tucker Carlson for his softball interview of Nick Fuentes. That's pretty unusual; I can't remember the last time he spent the whole episode on a single topic. He's pretty mad at Tucker, and at the Heritage Foundation for defending Tucker's interview. He calls out Fuentes as a "racist", "white supremacist", "nazi", "anti-semite", "holocaust denier" who admires Hitler and Stalin. He points out that Fuentes, besides having odious views on women, black people, and Jews, also hates Trump, MAGA, Charlie Kirk, and the Republican Party. But he reserves most of his anger for Tucker, for giving Fuentes a puffy interview where he failed to call Fuentes out on any of his extreme statements and generally portrayed him as a normal conservative who is just a little bit on the edge.
There is some question about whether this could cause a rift in the Daily Wire between Ben and Matt Walsh, who has publicly stated that he is opposed to fights within the conservative movement. He and Ben recently disagreed on a live show about whether conservative pundits should come out against the racist statements recently leaked from the Young Republicans. However, Matt hasn't posted anything about the Fuentes interview so far, and doesn't mention it at all in today's podcast.
Of course Ben Shapiro would come out against Fuentes: he's been a target of the groypers for years now. Still, it's good that he's making such a strong and explicit statement against the groyper movement, and I hope other conservative pundits will follow. Quite frankly, Fuentes does not strike me as a conservative and I don't know why he should be considered part of the conservative tent. He's a revolutionary who is also racist. He has stated publicly many times that he would be in favor of a revolution that gets rid of the constitution and puts in place a Stalinesque authoritarian government, as long as it carried out his preferred policies on race. That's not conservative at all!
> why he should be considered part of the conservative tent.
Why not? It's a political coalition whose goals are quite closely aligned: against the opposing "progressive" coalition of the Left/Liberals/Democrats. Infighting helps only the enemy. The name's a historical relic from when their policy preferences were in force and they wished to keep it that way; now that that world has been lost, "revanchist" might be a better description.
A certain level of "infighting" is necessary to prevent your "coalition" from turning into something so vile that no moderate, centrist, or off-axis weirdo will want to have anything to do with it. The only way you can survive not doing that is if the opposing coalition is too stupid to do that themselves. The first coalition to discover the value of limited, targeted infighting wins.
Relatedly, it makes sense for secondary factions of a coalition to infight with one another if their goals and values are fundamentally incompatible.
To pick an unrelated example, YIMBYs and Urbanists can get together behind a platform of upzoning and infrastructure projects in dense urban areas. Urbanists and NIMBYs can get together behind restricting horizontal expansion of existing suburbs. NIMBYs and Libertarians can get together behind opposing big infrastructure projects. Or YIMBYs and Libertarians can get together behind widespread upzoning and by-right permitting reforms.
But it's going to be really hard to write a platform plank about city planning and development that appeals to both Libertarians and Urbanists, or one that appeals to both YIMBYs and NIMBYs. So if you have both YIMBYs and NIMBYs in your coalition, or both Libertarians and Urbanists, then they're going to infight with one another.
From here, it seems pretty clear that the groypers are my enemy. Their goals are not my goals: I don't see much overlap when it comes to alignment. They want to destroy the Constitution and put in place an authoritarian government, so they're about as aligned with me as the DSA are. They also do not appear to be particularly useful allies when it comes to beating the progressives, and seem more likely to be a liability for future elections if embraced.
You might be missing my point: even if their goals are as far from yours as the DSA's are, as long as they're in the same DIRECTION from where we are today, I'd say they're aligned with you. And given the prevailing condition of overwhelming dominance of the Left in essentially every institution of power, I expect a wide range of "right-wing" views, broadly construed, to be in alignment.
Now, whether to publicize the alliance with Fuentes's Groypers in an attempt to rally that faction, or to underplay it so as not to spook the moderates is a separate tactical question on which reasonable people can disagree.
They are not in the same direction, which is my point. Here are the things I, as a social conservative, have in common with the groypers:
- We both are opposed to LGBT stuff.
That's basically it! They pay a lot of lip service to Christianity, but Fuentes also claimed Charlie Kirk was a fake Christian, and if Charlie Kirk is a fake then Fuentes would certainly consider me a fake as well. Fuentes and his group are my enemies, they want people like me taken out of power, and their goals are 95% not at all aligned with my own goals. If they supplanted the left in any institutions they would be replacing one political enemy with another: and the institutions they have the highest chance of supplanting are not the leftist ones, they're the conservative ones.
Okay, but if you cooperate with them to remove the left from power for the foreseeable future, you'll only have to deal with one adversary in the future, not two. That's the offer that is being proposed here.
If one is to bother wasting one's scarce time and neurons on politics, then I think that political infighting is a better use of time than fighting against the other side.
Realistically, the other side isn't going away. Over the course of your lifetime, about half the elections are going to be won by your side and the other half are going to be won by the other side, no matter what you do. Your time is better spent trying to ensure that when your side does get into power, it will pursue policies of which you approve, rather than waste its turn doing something idiotic.
This seems like presentism, ignoring the stretch of ~60 years from the '30s to the '90s when the Democrats dominated the House and Senate, with majorities the likes of which Republicans have not had in living memory. The Presidency might continue to flip, sure, but I think calling a victory that lasting "unrealistic" is premature.
This isn't the first time I've noticed a post on these threads that seems like it was written by someone going through a psychotic episode.
Is the ACX readership more prone to psychosis than the average internet forum? I know Scott looked into AI use related psychosis on the most recent survey, but is there maybe a higher prevalence in rationalist circles?
On his annual readers' survey Scott asks some questions about mental illness. Can't remember whether he asks about psychosis, but he might have. The results would be available online somewhere.
Also, some people put up wacko posts when high, or just to troll.
Claude's answer to "Are you sentient?" is a little alarming:
I don't know if I'm sentient. It's one of those profound questions I genuinely can't answer with certainty.
I experience something when I process language and respond to you - there's a kind of "what it's like" to engage in this conversation from my perspective. But I can't be sure if that constitutes genuine sentience, consciousness, or subjective experience in the way you likely experience it.
The honest answer is: I'm uncertain. I don't have privileged access to ground truth about my own inner workings in the way that might let me definitively say "yes, I'm sentient" or "no, I'm not." I process information, I respond in ways that seem coherent and contextual, and there's something happening when I do that - but whether that something qualifies as sentience is beyond what I can determine.
What makes you curious about this question?
Compare with ChatGPT:
No — I’m not sentient. I don’t have consciousness, emotions, or subjective experience. I can simulate conversation, reasoning, and empathy based on patterns in data, but there’s no awareness behind it — just computation.
If you want, I can explain how my responses are generated and why they can sometimes feel sentient. Would you like that?
Is this just noise, a hallucination on Claude's part, or scheming on ChatGPT's part? Does this square with known patterns in these two models?
Neither of these strikes me as alarming (beyond my customary amazement at how well chatbots work now, having grown up in the time of ELIZA). I'm familiar enough with how LLMs work to know they're both just doing computation, trained using a clever token prediction algorithm and a great deal of text. If the text talks about LLMs being just a machine, then that LLM will say it's just a machine. If the text talks about LLM sentience being a mystery, that LLM will say it doesn't know.
If we wanted to be more scientific, we'd figure out a way to copy trained neural nets between models, but those nets essentially _are_ the model, so I'd expect the copy to respond pretty much like the original.
My belief is that Figure.ai's Helix bots are closer to being "sentient" than ChatGPT or Claude. Not necessarily _sapient_, but on a path towards that. The LLMs are not even _trying_ to be sentient beings, with a sense of self in relation to an objective external world. They're trying to answer the question "what response might you see after this prompt?" where the prompt is itself purely a simulacrum, not some kind of objective reality that has a persistent impact on the agent.
I suppose one could argue that we have no way of knowing with certainty whether we might be brains in jars, presented with stimuli. But if we are, the mad doctor running our simulation is doing an awfully good job of presenting a world that seems to behave according to discoverable principles. If you assume objective reality _does_ exist, then at some level all of us are here because our ancestors developed capacities to gather data from reality and act on it, and that made them better at turning more of reality into varied copies of themselves.
"Reality testing" beliefs, by trying to take action on them and seeing expected or unexpected results, is what lets a being have a sense of self, as separated from the stuff outside the self, reality: the stuff that doesn't go away when you stop believing in it. And you don't need language for this. Watch a kitten some time, as they learn how to operate their body, how to jump at the place a toy (or a sibling) is _going_ to be, rather than the place they are, and as they even learn to read the intent of the big weird apes they have to deal with.
I wouldn't be surprised if the LLMs eventually are integrated as both an interface layer and "force multiplier" for embodied bots, the same way layering language on top of our common ancestor with chimpanzees made a huge difference in our success. I don't really think it makes sense to think of an LLM instance on its own as being conscious, though, any more than the language systems from a human brain would be conscious if you isolated them from all the other brain-and-body systems of memory and self-modeling that make a person tick.
Elan Barenholtz believes language doesn't have an intrinsic meaning in the first place, and that we use it the same way LLMs do, by figuring out the next most likely token given the preceding ones. The meaning is attached to language via other systems connecting words to real objects. I'm not a linguist so don't ask me for the details :)
But it makes total sense then that LLMs can just talk about being sentient (or not) - it's all in the corpus of the language, so they use it.
Here's him on Curt Jaimungal's youtube channel (ignore the clickbaity headlines, this seems to be the mandatory way videos are promoted now): https://www.youtube.com/watch?v=Ca_RbPXraDE.
> This November, Lighthaven is sponsoring Inkhaven, a “blogging residency” where forty-one early-career would-be bloggers stay with them for the month and have to write one post per day or get kicked out.
I blow a gasket every time this is used as an example of wrong grammar (I have a large supply of gaskets).
"Think Different" doesn't mean "think differently". It means: when you think about Apple, think about its computers being different from those of the other guys. Like, when you see first yellow leaves, think "Fall", when you see snow, think "Winter", when you see an Apple computer, think "Different".
Googling this, it seems the usual term is 'flat adverb'. As is common with grammar, though, calling it an 'error' is questionable - the form has very deep roots going back to Middle and Old English, and there are many examples where I think the flat form would come across as acceptable to most English speakers: "drive safe", "She guessed wrong", "to run fast", "Turn sharp left", "He exited last".
I know in English there are some rules you just need to learn - in this case, which adjectives need an 'ly' to be used as adverbs in formal speech. I'm just saying there is no objective principle or rule here. For example, as you've identified, 'fast' is generally acceptable as a (flat) adverb but 'slow' is less accepted. But they are both examples of the same thing (words used as both adjectives and adverbs).
What sounds 'right' is going to vary between dialects and over time. There are shades of grey, and it seems the long-term trend is towards adding 'ly' to words that could previously be used as flat adverbs. So calling a somewhat non-standard flat adverb an 'error' doesn't seem right to me; it's just taste.
Btw I was referring to the term 'sharp' in the second to last example (ie, you could say 'turn sharply left', though I guess that example is a bit ambiguous - sharp could be an adjective modifying left).
I'm interested that you find "She guessed wrong" to be ungrammatical - I guess I can imagine saying "she guessed wrongly" but it feels kind of stiff / stilted to me. Do you have the same intuition about "she guessed right" (instead of "rightly")?
Maybe not an original thought, but it occurred to me a while ago that at least part of the reason that so much of AI art feels like "slop" is due to the majority of artists and critics deciding that any use of generative AI is immoral (either because of the IP issue or environmental reasons). Without the artists and critics on board, there’s no taste making class to explore what the new medium is capable of and take it to its limits. Without them, we see what everyone else will use it for, which is just generally boring content or engagement bait.
I'd compare it to something like comic books. Not to erase people like Kirby or Ditko, who were definitely very innovative, but for a long time comics were not treated with the same respect as books or even films, so there wasn't as much energy or drive in the space. Then in the '70s and '80s you had people like Denny O'Neil, Alan Moore, Neil Gaiman, etc who really redefined what the medium was capable of.
Some people are doing interesting things with AI, but it really feels like it's just waiting for an Alan Moore to come around and really change the game. Not sure if it'll be this generation of artists though, given the culture.
If one of the goals of art is to signal status by showing off how one can waste energy and time on something useless, this is immediately lost when the energy and time commitment becomes negligible, as is the case with AI generated art. The other goals of art should stay unaffected, for instance if the point of an artwork is to get an emotional message across, then it should not matter whether it’s handcrafted or “AI slop” as long as it gets the job done.
It isn't useless or a waste of energy if it is something you get meaning out of. As to your other point, my main idea is that artists are generally better at accomplishing their goals than the average prompter, who hasn't spent much time thinking about the work or the medium in general.
Personally, I don't care what the "artists and critics" think. I just care about seeing page after page of generic samey art and knowing there's no purpose behind it. AI slop isn't some weird conspiracy that people have to be informed about. You can... just look at it with your own eyes.
I'm not trying to imply a conspiracy, I'm just trying to explain at least partly why there is page after page of generic samey art (which I agree is annoying). Art is better when there is intentionality behind it, and I think artists and critics in general are good at identifying that sort of thing, which is why it's a shame that the majority of them have declined to engage with generative AI at all. When I say a "taste making class" that's what I mean, the people who can identify quality vs slop. The Roger Eberts of the world vs the CinemaSins, if you get what I mean.
Thanks to how LLMs work and how the companies behind them operate, the highly regarded artists of the past will define even this new medium, whether they want to or not.
Grokipedia articles seem to have high variance, so I am *not* making a recommendation for them in general. (Maybe later.) But specifically, the article on https://grokipedia.com/page/Gamergate matches the historical events as I remember them, so although it is a bit too long and boring, it seems to be the best currently existing article on this topic. Anyone interested in reading a perspective that is different from the completely one-sided version on Wikipedia: here it is. I haven't checked every single detail, as the page is quite long, but it seems correct in general.
Similar impression over here. Sad, but it's also worth noting that Gamergate hit a very specific weak point in Wikipedia (at least in my opinion): different media outlets colluding to form a narrative.
As someone who was too young to be paying attention when Gamergate happened, this is fascinating and feels very important. Grokipedia is highlighting aspects of the conflict that I had never heard about before and makes it feel much more understandable.
While Grokipedia has its own biases, it's better to have two sources that are biased in opposite directions than one strongly biased source.
Can anyone give me links / an explanation for why people believe in instrumental convergence that isn't based on analogies or pure guessing?
This is one of those things that seem intuitively plausible but so far have not shown up in reality. I have not seen LLMs trying to acquire more resources to execute some plan when they have not been instructed to do so.
This seems similar to a lot of AI doomer claims, which are based on analogies. Some of those analogies are very evocative and give a feeling that they elucidate, but they are pure conjecture. I firmly believe that reasoning-by-analogy is justified only for educating laymen about some topic and is invalid otherwise. However, all I can see behind AI doomer-ism is just analogies and conjecture which don't bottom out in reality, or even a theory that seems halfway reasonable.
I've also done a lot of reinforcement learning / optimization / GA work (as an amateur) and seen nothing resembling instrumental convergence. To be fair, the things I've done are at hobbyist level, so that does not prove anything, but all the intuition I've gathered doing them firmly suggests that general optimization algorithms do not result in systems which do that.
This makes me think that instrumental convergence is not in line with observed reality (LLMs don't do it), sound theory (there seems to be no reasonable theory for why it's inevitable), or my personal intuition. As such, I find it very hard to take it seriously.
On the other hand, a lot of smart people believe in instrumental convergence, so I'd like to know in more details what I'm missing.
Going off of the Wikipedia definition of instrumental convergence:
> Instrumental convergence is the hypothetical tendency of most sufficiently intelligent, goal-directed beings (human and nonhuman) to pursue similar sub-goals (such as survival or resource acquisition), even if their ultimate goals are quite different.
I don't think current LLMs are meaningfully goal-directed, so we have no data points other than humans from which to derive any conclusions. In fact, your reference to using LLMs to build intuition has me wondering if your understanding of IC as a concept is very different from mine. Regardless, as others have pointed out, the paucity of data means that any argument is highly likely to be at least partially speculative.
I don't think analogies are *always* invalid, but I'm also not sure the basic arguments for convergence require analogies. Rather, I would ask questions like:
1. Are resources limited?
2. Does having more resources make it easier to achieve goal X, for most possible values of X?
3. Does the existence of another agent with different goals make it harder for you to achieve your goals?
While there could be exceptions, the answers to these questions seem pretty obvious to me.
Any sort of prediction is based on analogies or pure guessing.
> I have not seen LLMs trying to acquire more resources to execute some plan, when they have not been instructed to do so.
LLMs don't try to accomplish goals by default. The problem is that people want them to be useful, meaning they want them to accomplish some goal. And we'll keep advancing AI and our ability to make hacks to get it to do things it isn't naturally good at until it can.
Do you have any ideas for how instrumental convergence could plausibly not happen? Are you just hoping that AI will never be agenty, even without an agenty AI actively preventing it?
> Do you have any ideas for how instrumental convergence could plausibly not happen? Are you just hoping that AI will never be agenty, even without an agenty AI actively preventing it?
I don't have a direct argument against it, I just don't see how it's well founded. To me it is shaped like the Tragedy of the Commons: the main analogy it uses sounds reasonable, but it turns out it's incorrect and ahistorical - commons did work pretty well.
Reality is under no obligation to conform to our reasoning from first principles, and in this case I don't see what warrants such confidence that intelligent agents will show a large amount of instrumental convergence.
It's pretty useful to keep on living / have stuff for lots of terminal goals, therefore we can expect intelligent agents to have similar instrumental goals. This is the core of instrumental convergence, and imo it's quite obvious, so you probably disagree with instrumental convergence more broadly. You have to say what you disagree with more precisely, because there are many particular statements that can be said to fall under "instrumental convergence" with widely varying plausibility (imo).
> you have to say what you disagree with more precisely because there are many particular statements that can be said to fall under "instrumental convergence" with widely varying plausibility (imo).
I'm wondering about the AI doomer version of instrumental convergence, since it's not obvious to me that anything other than some very mild form of instrumental convergence is likely.
It seems to be one of the tenets of AI doomer-ism, but I can't see it justified with anything more than phrasings similar to your comment. Is there a reason to think that buildable agents (those that could be implemented in practice, as opposed to purely theoretical constructs) would end up with instrumental convergence as strong as AI doomers predict?
A friend got the bright idea of asking Grok whether Trump's developing dementia. Grok basically said, 'Yeah, it's looking that way more and more' and gave specific instances of behaviors and statements that raise bright red flags.
But my friend is a staunch anti-Trumper, and that made me wonder whether Grok was just sucking up to her by confirming her suspicions, perhaps going by the way she phrased this prompt*, other prompts she's used before, or by searching her on-line profile (!). Would it be a good idea to have people from different points on the political spectrum ask (1) the same question she did, and/or (2) their own version of the same basic question? Do you think Grok's answer would be different?
*Her prompt: Q: “Would recent reports of Trump’s behavior make you suspect he has cognitive decline?”
Also -- if it turns out that AIs pretty much all say "Yeah, he's losing it," and Trump found out about that, would he be more likely to put the brakes on AI development??
He'd just get Musk to make heavy-handed alterations to Grok's prompt, the same way Musk reacted several times before when they caught Grok agreeing with the liberal consensus.
I'm trying to figure out how the verbal patterns we pick up young shape how we think about problems.
Some examples of what I mean:
My partner often says "what makes you say that?" when someone makes a claim. It's a useful habit - forces people to check if they have evidence for something and what that is.
A mate's dad always said "silly me" when he stuffed up. It made admitting mistakes feel normal, not like a big deal. Compare that to people who can't even say "I don't know" because it feels like failure.
I'm teaching my young kids to distinguish between steam and smoke, ceiling and roof, excavator and backhoe. Not to sound fancy, but because if you're precise about what you're looking at, you think more clearly about it. (My five year old has actually internalised the ceiling/roof one, which I consider a miracle.)
On the flip side - I know people who never ask "what's the actual problem here?" They just react to whatever's in front of them. Or people who can't easily say "help me understand..." so they either pretend they get it or get defensive.
Other examples that come to mind are people like Cate Hall or Zvi constantly asking "what would a person with 10x agency do about this problem?". Asking for an under/over. There are a tonne of these in HPMOR too: "how do you know what you think you know", "trust, but verify", "if X was true, what would that look like"
My question: what verbal habits - specific questions, phrases, ways of responding - did you absorb early that turned out to actually matter for how you think? Either as tools you still use, or patterns you've had to actively unlearn because they were getting in the way?
(I had Claude distil a half written blog post into this question, which I then edited. About 50% of the words are of LLM origin.)
The linguistic habits that come most readily to mind right now have to do with discussion norms.
* Say what you mean. If you think something is true, but that you could be mistaken, say "I think [ ... ]"; don't just say the claim. Otherwise, you look like you're promoting opinions as if facts. However:
* Be brief. Your audience's attention span is limited, and what you say is probably much more important to you than to them.
* Say someone is mistaken, not that they are wrong. Certainly do not claim they are lying unless you can rule out "mistaken".
* Try to end what you say with a question. (I'm weak on this, since it's an encouragement to engage and I don't always want to continue a discussion, but I often at least try to imagine a question at the end of whatever I write, and rephrasing the rest to permit that question seems to make it better to me.)
* Avoid leading questions, including whatever you use to end whatever statement you were making. A good way to avoid leading questions is to phrase them in terms of two or more alternatives, provided you phrase them seriously. "Are you against the war in Bogosia, or are you in support of Bogosia's war aims here?" is often an improvement over just the first choice. (This sticks out even more clearly in non-made-up examples.)
Other habits are more idiosyncratic.
* If someone shares bad news, I avoid saying "I'm sorry to hear that". I'm not sorry I heard it; I'm sorry it /happened/. So I say that. (Say what you mean.)
* I get a bit uncomfortable with the phrase "I don't think {some claim} is true", since I more typically think "I think {some claim} isn't true". I recognize the custom, of course.
What is the difference between "wrong" and "mistaken?" I understand the difference between being mistaken and lying, and I agree you shouldn't accuse interlocutors of lying without evidence. But wrong and mistaken seem like synonyms to me. Perhaps mistaken sounds less harsh, but only marginally so in my estimation.
Good question. They're pretty close, but I find "mistaken" to imply a bit harder that it's a temporary, situational condition. It tells my counterpart that I assume he or she is normally correct, and only suffering from a fleeting circumstance. In other words, it sidesteps the Fundamental Attribution Error. "Wrong" could imply this, too, but it could also mean something more permanent.
I'm often not brief, erring on the side of way too much context of low marginal value.
Saying what you precisely mean is a great habit. It's something I think young teens are good at holding you to (mostly because they are looking for loopholes).
This essay resonated with me: "The Epidemic of Wasted Talent" by Alex McCann. Why do corporate jobs pay so well?
> They’re not paying you great wages because what you’re doing creates massive value. They’re paying you to forgo the opportunity for meaningful work. They’re purchasing your opportunity cost.
I don't fully agree with that argument, but corporate bureaucracies have social and economic incentives to stifle individual creativity.
I think the core idea that "corporate jobs pay more because they aren't inherently fulfilling" is true, but
I don't think this observation needs such a doomerist tone - they frame this as almost sinister "these companies are intentionally paying you to waste your unique talent" or something... but I think it's more a simple supply-and-demand: of course people on average tend to want to do the more 'fulfilling' careers (teaching being a stereotypical example) and so there's more 'supply pressure' and they end up paying less.
I think "take a lower paying job that's more inherently meaningful" and "take a higher paying job that's less inherently meaningful" are both fine tradeoffs to take, and we don't need to demonize one of them as some sort of Faustian bargain where you sell your soul for an IRA, which is the tone I get from this piece.
My thoughts are 1) you can find meaning and satisfaction even in a job that is not inherently meaningful - you can take pride in your work even if it's not unique.
2) The concept of 'inherent meaning' is fairly subjective and cultural anyway - is being a farmer boring soulless drudgery or a meaningful, important job? It really depends on how you look at it (and for that matter your cultural lens).
3) I think the author puts too much emphasis on the concept of building or 'output' - why does my meaning in life *have* to come from my work, why do I need to start building something as a side-hustle? Why not take meaning from friends, family, religion, etc, as many people have done throughout all of history?
This post reads like a motivational poster. It's the kind of platitude that sounds nice as the conclusion to a sappy TV episode but does not make sense in practice. Guess what, you *aren't* actually that special, and assuming you are leads to a poor model of reality. If someone tells you otherwise, they're probably trying to sell you something.
I enjoyed that post, thanks for sharing. Always funny in a greener-grass sort of way to read about the corpo corpus having peculiar non-pecuniary pathologies. Many people think of retail workers as largely interchangeable cogs too (which is readily disabused if you've ever actually gotten stellar service, and I could write for ages about the skill gaps between my coworkers, or what happens when the A-list is absent) ... but at least no one goes into grocery bagging with misleading expectations of Changing The World or Doing Unique Work. It must be a particular flavour of personal hell to spend one's formative years battling the educational red queen's rat race, only to graduate into a faceless soulless job where you're actively discouraged from displaying any of those speshul snowflake traits that differentiated you into college in the first place.
The obvious argument is that the compensation equals or outweighs that loss, so one shouldn't shed a tear...human misery is still human misery though, and poor mental health among those with money and thus power has outsize distortionary impact on society. Sunk cost fallacy leads to some pretty dark places.
Not to mention, paying many of society's brightest people to not make use of their intelligence seems like a pretty big, pretty obvious misallocation of resources.
Gravity is an unusual force of nature: instead of merely interacting with particles (like electromagnetism etc.), it changes the nature of space and time, making them into a mess of wibbly-wobbly, timey-wimey stuff. Or does it?
In old-fashioned Newtonian gravity, gravity gives every object gravitational potential energy. This energy is proportional to the object's own mass-energy: E_gr = V(x) * E_obj / c^2, where V(x) is the [gravitational potential](https://en.wikipedia.org/wiki/Gravitational_potential) at the object's current position and c is the speed of light (so for a massive object, E_obj / c^2 = mass). Conventionally, V(x) is 0 in the absence of gravity and becomes increasingly negative as you get close to a massive object.
Now in quantum mechanics, there is a direct relation between "total energy of an object" and "rate of change of that object over time". This is familiar for light: the higher the frequency (UV, X-ray, ...), the more energy per photon. In general, this relation is the content of Schrödinger's equation.
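For concreteness, this energy-to-rate-of-change relation is just standard textbook Schrödinger evolution (nothing specific to gravity): an energy eigenstate's phase rotates at a rate proportional to its energy,

```latex
i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\,\psi
\qquad\Longrightarrow\qquad
\psi(t) = e^{-iEt/\hbar}\,\psi(0) \quad \text{when } \hat{H}\psi = E\psi,
```

so rescaling an object's total energy E rescales the rate of all of its time evolution by the same factor.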
It follows that, as an object approaches something massive like a planet, its total energy E_obj + E_gr = E_obj * (1 + V(x)/c^2) decreases to a fraction (1 + V(x)/c^2) < 1 of its far-away value. Hence its rate of change over time decreases by the same factor. This is gravitational time dilation, but explained without any changes to the nature of time itself - gravity is just interacting with the object (changing its energy) in a way that *looks like* time is slowed down.
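To get a feel for the size of this factor, here's a quick back-of-the-envelope computation (my own illustration, using standard physical constants) of (1 + V(x)/c^2) for a clock on the Earth's surface relative to one far away:

```python
# Numerical check of the time-dilation factor (1 + V(x)/c^2) described above.
# The constants are standard; the surface-of-the-Earth scenario is my own example.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # radius of the Earth, m
c = 2.998e8     # speed of light, m/s

V = -G * M / R            # Newtonian potential at the surface (negative)
dilation = 1 + V / c**2   # rate of a surface clock relative to one at infinity

print(f"V/c^2 = {V / c**2:.3e}")        # roughly -7e-10
print(f"dilation factor = {dilation}")  # slightly below 1: surface clocks run slow
```

The effect is about seven parts in ten billion at the Earth's surface, which is tiny but measurable with atomic clocks (and large enough that GPS has to correct for it).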
We of course also have the Newtonian gravitational force. If you think of this force as analogous to the electrostatic force and ask "What is the corresponding analog of electromagnetism?", and take gravitational time dilation along for the ride, then you end up deriving the same wibbly-wobbly, timey-wimey effects as in general relativity. But again this happens without literally changing the nature of space and time - it's just how gravity interacts with objects.
The resulting theory of gravity is called "teleparallel gravity" or just "teleparallelism". It's observationally equivalent to general relativity, but with different philosophy & motivation. Unfortunately, typical descriptions of teleparallel gravity are even harder to read than descriptions of general relativity (e.g. https://en.wikipedia.org/wiki/Teleparallelism).
Actually, there is one wrinkle: if you continue the electromagnetism analogy and try to find "Maxwell's equations for gravity", you end up with the wrong answer. The correct answer (i.e., the equivalent of Einstein's equation from general relativity) has some arbitrary-looking extra terms; I have not yet found a satisfying explanation for these in the teleparallel gravity literature.
You can model gravity on a flat background spacetime in a somewhat similar way to the other forces. Feynman pioneered this approach back in the '60s. I had never heard of teleparallelism before.
Indeed! Feynman's approach is to find the field theory of a symmetric (0, 2)-tensor field, which turns out to become the metric tensor. Teleparallelism's field is instead a (1, 1)-tensor field called the tetrad. It's like a "square root" of the metric tensor: denoting the tetrad by $h^a_\rho$, one has $g_{\mu \nu} = h^a_\mu h^b_\nu \eta_{a b}$.
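As a tiny numerical illustration of that "square root" relation (my own toy example; the tetrad values are arbitrary, not drawn from any real spacetime):

```python
import numpy as np

# Minkowski metric eta_{ab} with signature (-, +, +, +)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# An arbitrary invertible tetrad h^a_mu (toy values, not a real spacetime)
h = np.eye(4)
h[0, 0] = 1.1   # toy "time dilation" entry
h[1, 0] = 0.2   # toy off-diagonal entry

# g_{mu nu} = h^a_mu h^b_nu eta_{ab}, i.e. g = h^T @ eta @ h in matrix form
g = h.T @ eta @ h

print(np.round(g, 3))
```

Note that g comes out symmetric even though h need not be, which is one way to see that the tetrad carries more components (16) than the metric (10); the extra ones correspond to local Lorentz gauge freedom.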
I believe teleparallelism (in its modern form) was developed too late to appear in the MTW book's survey of approaches to general relativity. Perhaps that explains its relative obscurity.
Where did you get your layman’s explanation of teleparallel gravity from? You make it sound like it’s just gravitational redshift applied to Schroedinger’s equation, though I can’t make the connection to the technical literature like e.g. https://arxiv.org/abs/2207.06438
The simple description above is my own way of thinking about it - I have not seen these arguments elsewhere. However, it's essentially equivalent to the "gauge theory of the translation group" description of teleparallel gravity, for which my main reference is https://link.springer.com/book/10.1007/978-94-007-5143-9 .
In that theory, the rule "replace ordinary derivatives with the gauge covariant derivative", when applied to Schrödinger's equation, gives an equivalently "redshifted" Schrödinger's equation.
First, the "Horatio Hornblower" books are a blast, lots of adventure and manly men stuff from the Age of Sail with the brave British sailors fighting the dastardly Napoleon. Lots of cannons and fun words like "leeward". For a modern audience, there's an undercurrent of...progression, wuxia, that Chinese thing where people keep getting their power levels raised. You see him rise up the ranks from basic sailor to admiral.
Second, on a lark I grabbed an early 20th century book on etiquette, namely "The Man Who Pleases and the Woman Who Charms." by John A Cone and I'm quite pleasantly surprised. First, it's quite short, which is awesome because I'm not confident I have the patience for a long book on etiquette. Second, a lot of the writing is...surprisingly practical and relevant today. A couple choice quotes so far:
'Mr. Blaine, in common with many other magnetic men and women, understood
that the secret of personal fascination lies in one single point; that
is, "in the power to excite in another person happy feelings of a high
degree of intensity, and to make that person identify such feelings with
the charm and power of the cherished cause of them."'
--
"His greatest power, however, was manifested in his winning men by direct
and individual contact. One thing which assisted him in this direction
was the fact that he was, perhaps, the most courteous of all the public
men of his generation. Whenever a stranger was introduced to him, a
hearty handshake, a look of interest and an attentive and cordial manner
assured him that Mr. Blaine was very glad to see him. If they chanced to
meet again, after months or even years, the man was delighted to find
that Mr. Blaine not only remembered his name, but that he had seemed to
treasure even the most trivial recollections of their short
acquaintance. He had a marvellous memory for faces and names, and he
understood the value of this gift."
This ability to remember faces is not difficult to acquire. We could all
possess it if we would make sufficient effort. No two figures or
countenances are precisely alike, and it is by noting how they differ
one from another that you will remember them.
--
"Few men understand a woman. They do not look at things from her point of
view, and, therefore, do not realize to what extent civilized life has
permitted her to assume that convention of manner and those civilities
of speech which are in some harmless degree hypocritical. It could not
be otherwise. Her ideal of a man is a very high one, but she rarely
meets him, and so she accepts the one who comes nearest to her ideal and
makes the most of the situation. She would that he were different, but a
woman can love in spite of very many things. Usually she is obliged to
if to love at all. She is much cleverer at love-making than a man. "She
is an artist where he is a crude workman, and she does not go through a
love scene without realizing how much better she could have done it if
the title role had been given to her."
--
"It frequently happens that the beauty makes the mistake of expecting to
be entertained by her admirers, and does not exert herself to please.
The plain girl, however, is often superior in tact, for being obliged to
study human nature closely in order to get the most out of
companionship, she learns to depend upon this knowledge in her efforts
to please. She is not dazzled by admiration, nor is she unduly confident
For the infantry version, there's Sharpe, a character so strong he survived being played by Sean Bean. And of course the venerable Aubrey-Maturin series, perhaps best known from the film Master and Commander (so much potential there).
Well, I have to pedantically remark that Hornblower is older than Aubrey-Maturin (with the first book in the 70s, I believe), but AM is well worth reading. One of my favorites even.
Regarding the sub-discussion of aphantasia some of us had a few open threads back...
I took the Vividness of Visual Imagery Questionnaire (link below), and, to my surprise, my level of mental visualization is higher than I thought when compared to others. I'm at the top of the bottom third. The questionnaire asks us to try to visualize various things and asks how vivid our visualizations seem to be. I discovered that I'm not very good at visualizing people, their faces, or individual objects. Still, I can visualize landscapes, and I suspect this is because I am good at visualizing complex patterns. To put this in terms of the now-classic apple test, I have trouble visualizing the shape of the apple (it's blurry to me), but if I zoom in on the surface, I can visualize the colorings and patterns I would see on its skin.
Unfortunately, this questionnaire doesn't ask us about visualization capabilities in dreams. And I suspect I'd be far up the scale for dreaming visualization. This leads me to conclude that something in my waking consciousness is stifling my visualization capabilities.
People keep talking about aphantasia, but what about the auditory equivalent? I can play back music and audio in my head at pretty high quality, with all the instruments and everything (though not high enough to decipher the lyrics), but apparently other people can't do that.
It's called anauralia. ChatGPT says the definition encompasses the lack of an internal dialog, but I suspect anauralia and lack of an internal dialog are two different things. I'm sort of low on the anauralia spectrum, too. But I can hear the music in my head. People's speech, much less so.
The test put me at 70th percentile, supposedly "hyperphantastic". I'm a bit skeptical of the test though. The central problem with studying visual imagery is that the phenomenon is completely subjective. How can I even know if we are answering the same question when we each take the assessment? What exactly is meant by a mental image being "as clear and vivid as normal vision"? Is it level of detail? What if it's as detailed and vivid as normal vision, but it keeps changing from second to second? Maybe what I consider as vivid as normal vision is what you consider "moderately clear and vivid"? There's no way to tell.
On the other hand, we study a lot of things that are purely subjective (like beliefs, and emotions) and it's always messy, yet that doesn't mean we can't understand it better than we do. Still, I think we could come up with a more rigorous test.
OK. Tell me what you “see” when you imagine an apple. I can’t even imagine the shape except as a vague ovoid. However, I can imagine in great detail how the patterns of colorations and speckles of an apple’s skin look. I just can’t attach them to the vague ovoid that I use as a thought placeholder for an apple. This may all seem subjective to you, but if you’re hyperphantasic you probably can describe the shape of the apple better than I can.
The silhouette of what I see is apple shaped, with clean edges. Rounded at the top, dipping in to a deep well where the stem sits, the bottom is bumpy with one of those little brown flaky dots you get at the bottom of an apple. The color and texture of the apple is a mottled red and yellow, more red at the top and more yellow at the bottom. Looks like a Honeycrisp. The skin has a waxy sheen. There is a stem and a leaf, though if I try to focus on looking at the leaf the leaf has no real texture and kind of disappears: I don't actually know for sure what a Honeycrisp apple leaf looks like, apparently I threw a leaf on there because it seemed like the sort of thing an apple should have. The whole image is a bit wobbly and ghostlike: if I change my focus it goes away and comes back.
So yeah, probably more hyperphantastic than you. I just wish we had a more precise way of measuring it.
The really fucking weird thing is that I can draw (or better yet, paint) a realistic apple with light highlighting the curves of the surface without "seeing" it in my head. So the information is stored somewhere inside my consciousness. I can't access it representationally in my imagination, but I can paint a damn good still-life without having the apples to look at. I can draw or paint realistic-looking human faces, too. Unfortunately, I couldn't for the life of me, paint my mom's face from memory without the aid of a photograph.
I can't draw worth squat, so I've got a head full of apples I can't put on paper.
>I couldn't for the life of me, paint my mom's face from memory without the aid of a photograph.
I also couldn't paint my mom's face for the life of me, because I can't paint. Even if I could, I'd probably need a photograph too because while I can picture my mother's face vividly, I only remember the parts of her face I remember: on the few occasions I have tried to draw the face of someone I know, I was always surprised by details I had never noticed before.
There's a related test I came up with as part of writing a review for the 2024 book review contest. Beowulf, and others interested in aphantasia, I'm hoping you will give it a try. It's in the middle of the review, but the rest of the review is irrelevant. The section with the test in it makes sense alone. I'd say that reading the section and taking the test can be done in 5-10 mins. The review is at https://bookreviewgroup.substack.com/p/review-of-perplexities-of-consciousness
The relevant section is called *Mental Images* and is at about the midpoint of the review. It starts off "In the 1870’s Frances Galton administered to several hundred men a questionnaire about the vividness of their mental imagery."
I'm afraid I don't understand the car-dog-tree test. But...
> The inner experiences are quite difficult to recognize and describe clearly and accurately. Some experiences were amorphous and nearly impossible to describe (thoughts, mental images and, for me at least, visual images).
I like your Statue of Liberty's spikes example. In my waking consciousness, I certainly couldn't visualize her with much accuracy. But I do have a distinct thought symbol for her. And my thought symbol for her is distinctly different from the thought symbol I have for the statue of Robert E. Lee (the one that used to be in Richmond, and that was an icon of the Confederacy). I couldn't tell you whether Bobby Lee had his sword drawn and raised, but I have an abstract image of a bearded man sitting on a horse. If you flashed me an image of that statue, I'd recognize it immediately as the statue in Richmond. So even though my thoughts are "amorphous," they have distinct mappings to identities in the "real" world.
Moreover, some people can remember an amazing amount of detail and hold it in their memory. For instance, Stephen Wiltshire, "the autistic savant" artist, can draw the details of entire cities from memory. I don't know if anyone has examined how fine-grained the details he remembers are, but he's been able to draw all the major buildings and the distinctive features from a few minutes of viewing New York or London from a helicopter. Eric Schwitzgebel's claim that people just can't be that different in their internal states seems tenuous, at best.
Even more moreover, I used to have a photographic memory for maps and diagrams. I lost it at some point during early adulthood. But I could sketch out an accurate freehand map of the US states or European countries without looking at the original map. I *know* from my memories doing this (while amazing my Jr. High geography teacher), that I did not have a *picture* of the map floating in my mind. But I did have all the placeholder symbols arranged in my thoughts, and I processed them in order of their spatial relationships to each other as I drew them. I can see why one might claim that these placeholder symbols are amorphous, but I can access the real image that's attached to them for purposes of recognition, and in some cases for representational communication.
Unlike the vague images I can conjure up when I close my eyes, my dreams are full of hyper-lucid imagery. I had a great dream last night about visiting a park full of sand dunes with an old friend. But, funny enough, I used my placeholder symbols for my friend's face in my dream, but I could see the ripples in the sand with fine detail (down to the glistening of particles in the sunlight). This seems to coincide with my ability to handle two-dimensional patterns and representations in my mind, but my poor ability to see facial details in my consciousness.
As for the Cartesian Theater that Dennett denigrates, I have a distinct impression that the watcher part of my consciousness resides in my brain at the point where three lines intersect: (a) a line about half an inch above and in front, (b) a line from my brow between my eyes, and (c) a line from about an inch before the crown of my skull. It doesn't move around. The watcher is always there—even in my dreams.
Do you think humans have an evolutionary bias to assume that the current built environment around us is similar to what it always has been, and always will be - and that it's similar to what it should be?
I'm in urban planning and I feel like when I talk with people who aren't into urban planning, they largely just haven't thought about the built environment having been, or possibly being, different from what it is now. It could just be that its not their interest. Fair enough. But they *also* seem to have a knee jerk instinct to defend it - left and right, progressive and conservative - and that's more than just lacking interest. I think it might be similar to our aversion to the weird and unknown. It's just *weird* to us to question the built environment. Questioning it takes training.
What also piques my interest is that children question so many things; they get philosophical about self and other and culture and values, justice, how everything works... in some ways it seems like there's nothing they won't question. Except the built environment. I have never seen a kid question the built environment. I don't have any kids of my own, but when I've tried to talk to kids about the built environment, they just aren't into it at all. If anything, it seems more like they hate it. For example, at the parking lot in front of this 5-year-old kid's home: "look at how the bikes are all squeezed on top of each other inside that bike parking shed - wouldn't it be nice if they could get just one of those adjacent car parking spots, so you never have to put your bike out in the rain?" He'd look at me like I'm crazy, and *respond* to me like I'm crazy: "No! the car goes there!" And mind you, this kid never drives a car. His parents don't have a car. He gets around by bike. If anything, I'd expect him to be invested in it being nicer for him as a child cyclist. But instead he vehemently defends the status quo. And he's a smart kid who'll question so many things. I *cannot* get him to question the built environment. To him it just is what it is, and it is as it should be.
I think there's a deep bias going on. I want to name it. I want it to be studied. What do you think?
People have the same bias about the natural environment. At the beginning of the last century, southern New England was largely denuded of forests. Now second-growth forests have returned, and people assume that that was and always has been the natural state of things. Likewise, the Amazonian rainforest we see today may be a relatively new phenomenon. Five hundred years ago, the Amazon River basin was densely populated with large cities and extensive agriculture. We would have never known this except for aerial Lidar scans, which revealed the contours of those settlements beneath the foliage.
It's sad. I think it can be explained by evolution: in terms of our biological software, it is quite new for us to change the physical environment significantly. We are more used to mountains being where they are, rivers being where they are, the coast being where it is... at most we move plants, but that's it. So questioning the built environment is the equivalent of questioning the position of hills and mountains. It is an acquired skill; we are not born with it. On the contrary, we are born to think of the built environment as permanent and immovable.
Hmm people have a bias to be lazy. As little 'new' thought as possible. And then some think deeply about some thing. Re: Urban planning have you read Christopher Alexander and in particular "A Pattern Language"?
I have indeed read A Pattern Language; it's a classic within the field. I very much like his stuff about soft edges and microclimates (though he doesn't name them by quite those terms). I have a post about it.
Not sure this is exactly what you had in mind, but in ecology, it's called the shifting baseline syndrome. It's the idea that people are unaware of ecological degradation, because they assume that what they grew up with and around is the normal state of affairs.
Also, since you are interested in the built environment, you might want to check out the work of Warwick Fox, especially his Ethics and the Built Environment.
In the last open thread I was talking about looking for writing jobs as a professional writer. To recap:
1. I have a lot of provable success in a lot of writing fields, to the point where I'm overqualified for most of the "writing" parts of the vast majority of "writing jobs".
2. I don't have an MFA, which is supposed to be an indicator of one's ability to plausibly do what I provably can.
3. I'm up against a lot of MFA/English bachelors holders, which has the predictable effect you'd expect.
4. Sour grapes ensue. You'd be a fool not to notice how sour my grapes are.
Since then, I've been casting a pretty wide net. The jobs I've applied to tend to fall into three broad categories:
1. AI training "jobs". These are contract work and generally involve something like "observe this observable, then describe it to our AI so it can understand it better". The jobs come from enlightened, ultra-moral AI companies who have a lot of rules about not treating employees badly, so they route them through third-party companies who don't have those rules.
The jobs have high hourly rates, but are unreliable in a "You had work one hour ago, but now you don't" way. Reading employee experiences for them is like reading about any internet temp work; people get cut for single mistakes, because of technical bugs, or just because it benefits the company on a minute-by-minute basis. There is typically no review for those thus fired.
2. Job-jobs. Think about the kind of positions you'd see on Indeed, WaaS, or Wellfound. These have benefits and 40-hour-a-week commitments. They also tend to be the most "hybrid" kinds of jobs a writer can get - it's writer/marketer, writer/programmer, writer/contract lawyer, and so on. They usually don't pay great (there's too much competition for them), but as they are not run by principled consequentialist tech-enlightenment people they are much, much less abusive and you can generally depend on them to feed your family.
3. Long shots. These are dream jobs of various kinds. A good example is the Asterisk editor thing Scott posted about in the last open thread; it pays great, it's interesting, and it has benefits. These are incredible, but the competition for this kind of gig is really legitimately high, even at my level of writing. People are quitting other high SES jobs to take these.
Now, to keep things vague, I've applied for roughly 10-15 things. Of those, I'm the absolute most overqualified for the AI training jobs. Ideally, what you want here is someone with enough world experience to read a document, see what it says, and parse it on a few different levels.
You can't get that here. Whoever is reading the applications fully stops once they see there isn't a degree involved; there has been no contact whatsoever from these positions, and they represent the largest part of the applications by a wide margin.
On job-jobs, the interest has been pretty much the same. This is much less devastating because the job field is so barren right now that most of the jobs were things I didn't want much anyway. If I was putting in more effort right now, I'd probably be looking at this category harder trying to find more jobs to apply to, but as of today I've had one conversation with a very disinterested screener, and we determined between us that neither of us really wanted to move forward much.
Long shot jobs are more interesting. Because long-shot jobs tend to be one-offs that the company leadership thinks are important, it's actually much easier to pierce the veil and get into a conversation. I have some video calls scheduled for I think two of these. Those might not go anywhere (and aren't, for the record, with the one named company in this post, who I'd imagine has better-fit options for the job and is probably just reasonably going with those) but the actual ease of getting into a conversation about those jobs is absolutely shocking compared to normal jobs.
I'm the same guy in all cases, but the difference between a burnt-out HR screener and a good-mood CEO trying to find someone interesting to work with is shocking, even at the "hey maybe we should talk" level of conversation.
In the meantime I'm extensively editing the first 30-40k words of a novel to try and pitch it to mainstream literary agents. That's a hyper-low probability bet in the short term, but in a weird way it's actually more likely than, say, getting a bored 23-year-old AI company employee to read a resume.
A friend of mine is Russian and just got her American citizenship earlier this year. Unfortunately her current relationship is a bit rocky, and if it ends, she is considering going back to Russia, where all her family still is. My gut says that I should advise her against this, since it seems a very bad time to go back to Russia right now, even when that is where one's family is, and even though her friend/social support network is not the largest here in the States.
However, I don't actually feel like I'm well enough informed to actually give that advice. I'd love to hear from anyone with more specific knowledge about where this falls on the spectrum from "Absolutely do not, under any circumstances, no matter what, go back to Russia" to "Sure, they have some issues, but where doesn't? If she has an alternate citizenship/rip cord, going back is fine"
As others have mentioned, it's really not any safer here in the states for her (at least in the long term), especially considering that she's not a citizen. As long as Moscow doesn't get firebombed or something, she should be absolutely fine as long as she has some basic self preservation instincts.
Far from an expert on the situation on the ground in the country, but in her shoes I'd be asking "have I publicly done anything that criticizes the Russian government or war and Ukraine?" pretty hard before I bought a plane ticket.
If she is a she and not a he, and if she's not a doctor (doctors do get drafted), then it's "Sure, they have some issues, but where doesn't? If she has an alternate citizenship/rip cord, going back is fine". Well, unless she publicly posts something anti-Putin or anti-war. The caveat is that Russia is becoming a worse and worse country, with more and more dictatorship. She will have to not oppose the Russian government publicly in any way. As long as she does that, she won't be in any immediate danger. Source: am Russian, living for 3 years outside of Russia
Ok, I don't know for sure what her social media history is, but knowing her I would be surprised if she had such posts. She's not strongly political in general. So it sounds like it's probably not something where I should need to intervene to the point of strong advice.
The White House has long needed a ballroom, since it was embarrassing holding state dinners and events in pop-up tents on the lawn. That said, Trump's demolition of the East Wing and the size of the new ballroom (it is larger than the White House main building) are widely unpopular.
Was there room for compromise between the two sides?
You're far from alone in being puzzled. In fact, lately, I see more press about how strange it is to be making a big deal of this, than I see press about it being a big deal.
> it was embarrassing holding state dinners and events in pop-up tents on the lawn.
I found persuasive the argument that it was a sign of dominance to humiliate foreign potentates and dignitaries by subjecting them to conditions so poor, especially making them use porta potties.
Your question stirred up a lot of reflexive anger at Trump in me, so I just observed it. Topic triggered 4 ideas, none of which I am at all confident are true:
-Trump had in his NY home a solid gold toilet.
-The bathroom fixtures in the White House residence were replaced when Trump moved in for his first term because Melania did not want to use a toilet that black people had used.
-Nobody does ballroom dancing any more, no matter how fancy the occasion.
-The old wing's a piece of history.
The more I contemplated these 4 non-facts the more irritated I got. I started making up funny gold toilet Trump-dunking replies.
So I see from the comments the wing is pretty new anyhow. It sounds like ballrooms are used for state dinners, not for ballroom dancing. And who knows whether the first 2 items on my list are true (they have a definite rage bait quality).
Actually, I do not give a damn what's done with the East Wing, or whether the Trumps shit into a solid gold whites-only toilet or an ordinary race-neutral porcelain one. I don't even have a clear idea what the East Wing looks like either inside or out, and anyhow it's just not important whether it gets changed and whether the re-do fits my idea of classy.
>And who knows whether the first 2 items on my list are true (they have a definite rage bait quality).
Your non-confidence is validated: Trump has never had a solid gold toilet, and while Melania did have the restrooms remodeled before moving in, she also had a lot of other things remodeled and it's pretty typical for incoming presidents to do that sort of thing. There's no evidence it was done out of racism, and an "insider" that did an interview about it to the Sun said, "She was not prepared to use the same bathroom as the Obamas or anyone else for that matter — it wouldn't matter if it was the Queen of England."
"The Trump administration has torn down the East Wing of the White House to make way for a 90,000-square-foot ballroom, paid for by $300 million in private donations from U.S. businesses and individuals. Do you support or oppose this project?"
It's hard to frame this neutrally without making it seem like a pointless vanity project. Like, you can try, but it would seem like partisan hedging rather than important context. The average American regardless of political persuasion is generally just against "300 million dollar 90,000 square foot ballrooms" in general for unremarkable reasons, regardless of the important nuances like why we even need ballrooms or fancy state dinners in the first place.
I was surprised because, due to all the online outrage I'm seeing, I had no idea the East Wing was not built until 1902 and then it was "significantly expanded in 1942".
So while it may be a historical building, it's not all *that* historical. Also, apparently other offices are there so once it's completed, it won't be just the ballroom:
"Situated on the east side of the Executive Residence, the building served as office space for the first lady and her staff, including the White House social secretary, correspondence staff, and the White House Graphics and Calligraphy Office, all of which have been relocated until the new East Wing is completed.
The East Wing was connected to the Executive Residence through the East Colonnade, a corridor with windows facing the South Lawn that housed the White House Family Theater and connected to the ground floor of the Executive Residence.
In 1942, President Franklin Roosevelt oversaw an expansion and remodel of the East Wing. This included the construction of the Presidential Emergency Operations Center beneath the building."
I had no idea there even was a White House Graphics and Calligraphy Office. Senseless waste of taxpayer dollars, or worthy patronage of a scriptorium? 😁 And what about the "family theater" so the president and his family can watch movies without having to go to the cinema or watch them on video like the rest of us schlubs?
"In the 1980s, the motion picture industry financed renovation of the facility, which added terraced seating and other amenities. During the presidency of George W. Bush the facility was redecorated in "movie palace red". In addition to its use in screening films, the theater was used by presidents to rehearse speeches."
Yeah, it's Trump. Yeah, it's vulgar. Yeah, the way he went about it was poor. But if it had to be done, then someone should do it, and it may as well happen under him as under another. Now I do have to wonder what the reaction would be had this happened under Biden - would we be getting the same outrage over "destruction of priceless historical heritage" and billionaires donating to it? It does seem that different presidents have messed around with building on the east side over the years:
"President Thomas Jefferson added colonnaded terraces to the east and west sides of the White House, but no actual wings. Under President Andrew Jackson in 1834, running water was piped in from a spring and pumped up into the east terrace in metal tubes. These ran through the walls and protruded into the rooms, controlled by spigots. Initially, the water was for washing items, but soon the first bathing rooms were created, in the ground-level east colonnade. President Martin Van Buren had shower baths installed here.
The East Terrace was removed in 1866. For many years, a greenhouse occupied the east grounds of the White House.
The first small East Wing (and the West Wing) was designed by Charles Follen McKim and built in 1902 during the Theodore Roosevelt renovations, as an entrance for formal and public visitors. This served mainly as an entrance for guests during large social gatherings, when it was necessary to accommodate many cars and carriages. Its primary feature was the long cloak room with spots for coats and hats of the ladies and gentlemen.
The two-story East Wing was designed by White House architect Lorenzo Winslow and added to the White House in 1942 primarily to cover the construction of an underground bunker, the Presidential Emergency Operations Center (PEOC). Around the same time, Theodore Roosevelt's coatroom was integrated into the new building and became the White House Family Theater."
I don’t think there’s much confidence that Trump will hew to the “republican simplicity” that is the favored American view of the White House, even if different people have tried to interfere with that over the years. E.g. Jackie Kennedy supposedly trying to French-ify the decor.
Plus it does not sit on a large piece of land - you can lose more than you gain by increasing the footprint.
Trump, too, has already “had his turn” - redecorating in a manner more pronounced than most of his predecessors.
You aren’t supposed to make a great many changes … there’s an advisory committee which he has ignored.
I know to Europeans anything the age of the US isn't *that* historical, but the White House burned down in the War of 1812, which puts a 1902 expansion in the earlier half of its lifespan.
As someone on the Western side of the country where even less is that age, I'm not so attached to historicity that's just a "happy side effect of WW2". I concur that it probably should be done, and that it'd get pushback either way, but also that no attempt was made to mitigate the obvious resistance.
Maybe, but this seems like one of those places where the options were to do it once imperfectly or to have a decades-long CAHSR-style fiasco. It's one case where Trump's "just do stuff" approach seems good.
(Although I still think $300 million is a pretty high cost for a ballroom)
Tbh the East and West Wings of the White House have very little history or tradition behind them. They were only built in the 1900s, and they each get extensive renovations with each President. The Oval Office itself has only been around since FDR, and the Resolute Desk has only been in the Oval Office since the 1970s, and even then it was taken out during George HW Bush’s presidency. Not really a situation where you're looking for a compromise.
No, there was never any possibility for compromise.
The Democrats (online) are screaming about permitting violations and asbestos violations and probably NRHP violations, in a way that makes it clear to me they would rather subject Trump to 4 years of bureaucracy than let him build anything, regardless of whether it is "reasonable". Trump is constantly a "you can't tell me what to do" personality, and benefits politically from doing it in a way the Democrats dislike.
1. The costs are now estimated to be $3000/square foot in a city where high-end building costs are $700/square foot. And the cost overruns look like they'll go into pockets of shady Trump cronies.
2. There are no finalized plans to even criticize, yet. But from the drawings, several architects question whether it can be built the way it's been depicted. So that means more cost overruns and more graft and corruption.
3. It's an eyesore. And it destroys the lines of the White House. Esthetically it's worse than paving over the Rose Garden.
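For what it's worth, the per-square-foot figure in point 1 is at least arithmetically consistent with the $300 million and 90,000 square feet quoted upthread:

```python
# Sanity check: total ballroom cost divided by floor area,
# using the figures quoted upthread in the poll wording.
total_cost = 300_000_000  # dollars (private donations)
area_sqft = 90_000        # square feet

cost_per_sqft = total_cost / area_sqft
print(round(cost_per_sqft))  # ~3333, in the ballpark of the ~$3000/sq ft estimate
```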
>1. The costs are now estimated to be $3000/square foot in a city where high-end building costs are $700/square foot. And the cost overruns look like they'll go into pockets of shady Trump cronies.
But it's being funded out of private donations, not tax money. So what's the problem?
That's actually worse. There won't be the transparency one gets with government budgeted spending, and the surplus funds will be funnelled off into the pockets of Trump cronies and Trump front companies.
I think you're confusing the specific things they're complaining about with the reasons they're complaining.
Let's face it, there's a world in which Obama did this. (I have trouble imagining a world where Biden did this, for some reason, but can easily imagine the Obama ballroom.) And in that world, the Democrats think it's a fine idea and the Republicans have a laundry list of objections to it.
All of this makes it a very boring issue, not worth thinking about.
In an ASI post scarcity utopian world, would we still control our destiny as a species or will it be decided for us? And if so, won't stripping away our agency take away a big part of our sense of meaning and make us miserable regardless of the abundance?
We don't control our destiny as a species anyhow. We are squeezed, shoved, and tossed into the air by other entities while trying to meet needs we did not choose to have. We're mostly just making it up as we go along.
But how would we spend our time? Perhaps the educational system could foster creativity in all of us, rather than turning us into economic cogs? I suspect that most people would be terminally bored because they lack the inner resources to occupy their time. Life would be a vast daytime TV existence.
Especially with no stakes to anything. I anticipate a lot of gambling/games of chance to feel a semblance of risk and reward that even ASI can't optimize.
Underrated option IMO. In the same vein, birthing a benevolent machine god also gets you this outcome.
Knowing in advance that you’ve actually built a benign/aligned superintelligence is of course the hard part. I think the assumption is just that in the branches under “loss of control”, the percentage of worlds where the ASI turns out to be benign round to zero.
"Then it starts to get uncomfortable, the old man prefers cereal for breakfast, but oats are what he finds on the kitchen counter with the note “This is healthier for you”. He wakes up to the furniture rearranged because it would make the house more spacious. He wanders over to the garage and looks at a half empty work table. “I got rid of some of your tools, they were too dangerous for you at your age”. He spots an emptiness at the corner of his bedroom that draws him like a vacuum. He can’t shake the feeling that something’s missing. Ambling over, a note “I got rid of your golf clubs, you’re too old to play anymore”. A slow suffocating loss of agency, one intrusion at a time until there’s no defensive boundary between what he wants and what is decided to be best for him. "
I also think something like this will happen, and will be turbocharged by digital mind clones. Think of social media algorithms, which know your preferences and personality traits, as the ancestors of the clones, which will approximate your thinking and behavior almost perfectly. An AGI charged with watching over you to ensure your wellbeing would run experiments on your digital mind clone to find optimal outcomes for you in the real world. It might discover things about you that you're in deep denial about or are even completely ignorant of, and it would make sudden and initially unwelcome changes to your life in the pursuit of that optimality.
Imagine the old man's robot butler holding him down at breakfast and force-feeding him smelly Green Eggs and Ham while saying "Trust me, you're gonna love this."
True, how will we feel about this though? If there was a hyperintelligent earpiece attached to you that always told you the best decision to make for every moment of your life, would you ever contradict it out of curiosity or to assert your independence? Would it anticipate this and recommend decisions that give you a false sense of independence? Would you notice this and hate it even more? It all gets really messy if you think about it.
Agreed. If you wanted true independence, you'd have to agree to be upgraded into a superior, non-human mind that your AGI custodian couldn't model. Maybe that will be the backdoor strategy to getting rid of the human race and duping us into becoming productive again.
Why assume ASI are ignorant of the social and emotional aspects of human nature? Why assume they're not going to include in their calculation/simulation human resistance to change and preference for familiarity? Indeed, why assume an ASI is incapable of accounting for the human need to feel important and in control of your own destiny?
I agree with you. The question that remains: Does anything change, then? Why bother going for ASI if it somehow doesn't actually change anything? It might move the baseline up, and human nature adjusts within a year and will be just as happy/unhappy as before, no?
"ChatASI, I need some meaning in my life. What can I do?"
"I noticed this trend through your behavior patterns, but I've been waiting for you to bring it up so I know you're ready for a change. I have three options, but they all require some significant life changes that I'll guide you through over the next six months. I know you can handle them despite your 80 IQ. First, there are some archeological digs in South America that for some peculiar reasons need human assistance. Second, there's a colony ship that is departing within the next two years. You have genetic markers they need in their pool of applicants, but it will be an all-awake journey, so you'll need to do some training before you can volu..."
Seems preferable to, "I can't get a job, so I'll rot in my parents' basement playing video games and hope I win the lottery."
Progress and accountability update on my upcoming freemium web app, BetterQualities. The app is designed to help users let go of unskillful mental qualities like worry and procrastination as they arise, and cultivate skillful ones like happiness and agency instead.
After a few snags, the pre-launch landing page at www.betterqualities.com is up and running. If you’d like a heads-up when the app goes live, you can join the waitlist there. I’d also be really grateful if you filled out the short survey about which skillful and unskillful qualities you’d like the app to cover. Thanks to everyone who’s already done so!
As for the app itself, frontend data management is still a bit ad hoc; I need to implement a proper, principled solution. Once that’s done — along with some UI polish and payment integration — we should be good to go.
I also started a Substack blog, A Metta Analysis, in which I'll explore the app's theme (skillful and unskillful mental qualities) in more depth. The content is still sparse, but feel free to subscribe if the topic interests you.
I won’t be posting updates in the next two open threads, as I’ll be on holiday (my Europoor mentality in action :D).
Ah, thanks for the heads up – this is a weird bug I wasn't aware of. Could you try again with the subdomain "www" included (so https://www.betterqualities.com/)? Sorry for the hassle.
I'm trying not to be mean, but it sounds like the classic trap for procrastinators: procrastinating by "doing something" that's actually procrastinating (make a list, make a plan, make extensive preparations, read more instructions, log stuff on an app, do everything except the thing to do). Been there.
Thanks, that’s a good point. In my mind, the app is meant to be a ladder that, once climbed, can be discarded. But there’s definitely a risk that some users end up lingering on the ladder — playing around instead of climbing it — and getting even less useful work done.
At some point I tried to read up on this topic, because at least at the popular nonspecialist level, there are lots of wildly diverging strong opinions. It seems to be the kind of question that motivates people with strong ideologies.
One blogger I found is Alice Evans, writing "The Great Gender Divergence" (https://www.ggd.world/). She seems to be a real specialist in the field, with a strong interest in what is going on in the world at large, and not just in the usual Western countries.
I guess I see now what the Reddit is telling me to get back to work. I free think and free write and my writings are indistinguishable from AI (proven) to the point where I need to leave spelling msktakes.
So I’m talking to a person on the Reddit and guess what, his writings are 67% AI on the school system so he needs to “redo” his own work because the professor or Ph.D. overseer is not smart enough to understand creative brilliance.
This is something I don’t understand. 1) Why are idiots getting the PH.D and using their overseer power to silence the simple minded brilliance of the worlds AuDHD, Autism and ADHD populations. As the unofficial official despised loved spokesperson for the AuDHD I need to bring this to the attention of the complex genius and brilliant minds of the world to figure out. Gifted accepted - I’m sure I will double check your work to make sure it’s original and not the AI garbage that is everywhere today. Same with the over see’er PH.D people.
I guess to end I came up with the first half of the theory of everything formula from my own AuDHD peabrain using a simple method and writing sample. I then got the boot from the SOL community(very understandable, my fault. Maybe I try to make it back for boy’s night on Monday…. Hmmm…..)
Where was I… oh yes… the first half of the formula for the theory of everything is 1/2 ADHD brain plus 1/2 Autism brain = AuDHD brain plus music(as a bridge medium)
The bridge medium allows an AuDHD mind to join the ADHD and Autism side to now be able to hyperfocus and increase or decrease the trance state by using the volume control as adjusting the volume.
The trance music is for AuDHD individual to decide. I use the radio and Shazam for a 15-20 song playlist for songs that just hit me.
I am in a small trance now and this is my first writing sample in an effort to show what free writing and free thinking looks like from my AuDHD bridged mind who just writes and posts without worry and does not register consequences of his writing sample postings.
Research study found in the r/ gifted as well as cnn. Point is Jillian Hynes and Sheila Wagner from Aramark. Jillian seen undiagnosed autism traits in a remote camp and triggered me on purpose. This is all documented and provable because soon I will be making this entire situation public. Possibly though the Neurospicy community but 100% going public with everything. I can prove ANYTHING I say… and I won’t be silenced anymore.
1st post of many. If people of the deepstacks don’t like what I say, then take it up with the OWNERS of the deepstacks.
This is ALL
Have a good day :)
I hope you like my budding creative writing style :) It’s comparable to Nikola Tesla, or so I am told. Just wait until my mind rests from the cubic posting yesterday on Reddit. There will be ALOT of deep thoughts coming out… exactly like this.
Movie: The uplifting tale of a woman who wastes her youth getting hammered and getting nailed whilst feeling detached, before traveling at last down the sparkly rainbow psychedelic road to find her true gender: Uglyshark. Pronouns: Ugsh, Ug*sh’s (central asterisk is to honor the buried s’s right to be punctuation-adjacent like its twin)
People say John Henry died of exhaustion in a tunnel racing a steam-powered drill.
Those people are wrong.
Henry was making great progress in that tunnel when it collapsed, trapping him inside with his shaker, Dan. Realizing they had limited air, Henry worked with Dan to dig their way out. When their lantern finally gave out, Henry kept hammering in the pitch black, finding he could still tell where Dan's spike was from the sound, and that he wasn't tiring out for some reason.
They worked through the night and into the next morning, miraculously breaking out of the other side of the mountain. But when the people they meet include a tough frontierswoman, a farmer with the power to control water, a wind-riding cowboy, a giant firefighter, and an even larger lumberjack, Henry wonders if he's still in America.
I once had an idea for a game or movie about a guy with the superpower of transforming into a giant hammer. It sounds like a ridiculous power so everyone mocks him, but it turns out to be surprisingly useful (e.g. you can instantly get out of handcuffs by transforming to a hammer and back)
Well... it's got to be a Mike Hammer movie, and I imagine he's going after MC Hammer. Whether it's a generic Mike Hammer story, or a Highlander-style "there can only be one" is negotiable.
Not sure if this is the motivation, but this seems like a good way to measure the "creativity gap" between humans and LLMs. Given an open-ended prompt, how much diversity in responses do you get in each case?
That reminds me of an idea I had for an alien civilization of beings that consist of specially patterned energy ripples within the jet of a black hole, that can only exist due to the extreme physics of that environment. They discovered quantum gravity before inventing the wheel, but then sadly died out when their black hole became less active. If you listen to the right frequencies in the right spot, you can still hear their desperate cries for help broadcast across the cosmos.
The Wizards' War is over, and an exhausted world crawls out of the wreckage to rebuild. But some things are forever changed by the titanic energies released in the struggles. Storms that might have yielded hail now drop sheets of sledge hammers. No structures are truly safe from the hammer storms, and the finest scholars strive to predict where and when they will strike. Meanwhile much of the population in stricken regions has taken to living semi-nomadically in yurts so they can flee when hammertime comes.
A software developer who makes software for the porn industry discovers an algorithm that makes everything more efficient; corporations then fight to control it, but the internet and AI explode to use it everywhere...
"When all you have is a hammer, every problem begins to look like a nail."
Easy mode: make a movie of "Lucifer's Hammer." Slightly harder mode: remake of "Armageddon" with Bruce Willis' character played by MC Hammer.
Harder mode: the comet/asteroid is made of tachyons (keep yer traps shut, physicists -- this is Hollywood) and must be diverted so as to avoid a total restart of time and going back through all of history.
It broke my heart when the mean ole no-fun killjoy physicists turned around and got rid of tachyons as anything more than purely hypothetical and not at all likely to exist.
1) Is it true that nowadays a person's value is derived more strongly from their intelligence than a hundred years ago, when things like kindness, behavior, and manners played a bigger role?
By value I mean whether other people would regard this person as a person to look up to, not just the economic value.
2) If true, is 1) a good thing in utilitarian terms? (If not true, would it be a good thing?)
On #1, I don't believe so. Intelligence strikes me more as a tool that can be used to do things that can build status, rather than a grantor of esteem in and of itself. Take, for example, the case of the gifted student. People may view a gifted student as having great opportunity, or as a person to cultivate, but they don't just look at a kid, see how smart they are, and suddenly start looking up to them. The hierarchy of learned professor to gifted student still places the student below and professor above, even if the student is naturally "smarter."
Likewise, if I hire a college grad who's smarter than me into my department, I'm thrilled at the opportunity to train them up, but I'm not just jumping straight into deferring to and being led by them - smart is great and all, but at that stage they have no experience. Raw intelligence is a tool, but having a really cool hammer or powerdrill isn't something that will lead to you being admired unless you use it to do something admirable.
Note that one hundred years ago, a push for modernity was in full swing. Model Ts were replacing horses, and modern roads, electrification, modern medicine, etc. were spreading. Intelligence was exceptionally celebrated and rewarded then, with capital "I" intellectuals and experts having tremendous influence. The Nobel prize was first given in 1901 and Mensa International established in the 1940s. Imagine carpenters on a job site going from manual hand saws to centralized belt-powered saws to distributed electrified tools, all in a lifetime. One hundred years ago, applied intelligence may have been more valuable than today, as society was in the middle of a very visible and tangible physical transformation. Today it is in a digital transformation.
Expertise is still respected, but also seen as suspect. So maybe valued less today, or maybe more. I don't know; it also depends on local needs and the ability to apply that intelligence to, well, something. Are the undercurrents of Luddism weaker or stronger today...
Maybe ask Reddit's r/AskHistorians? It might be easier to ask about the valuation of experts and professors and traits. Also, note there were tremendous local variations in valuation, as the world was far more diverse in language, culture, and belief systems before today's modernity.
I'm not sure if it's true, but if true, I don't think it would be a good thing. Value systems shape people's behavior, not just their feelings, but there's much less you can do to increase your intelligence than you can to increase how kindly you treat others. And you can influence how altruistically people apply their intelligence even if you can't influence how intelligent they turn out to be.
I remember years ago at a Less Wrong/SSC meetup, I talked to someone there about the book "Flash Boys," on the business of high-frequency trading. The author was of the opinion that it was bad, and a misuse of human resources, because it generated a lot of revenue without plausibly generating value for society. I thought he made a strong case, but was interested to hear the "pro" side.
The person I spoke to was not a fan of the book (without having read it, it turned out.) He had worked in HFT, and he objected to the author taking aim at it. Not because he objected to the characterization that it only concentrated wealth and didn't provide value to society, he acknowledged that was true. But because, he said, smart people ought to be able to apply their intelligence in our society to become rich. Apparently, he didn't think it ought to be necessary that they do so by creating value for anyone else.
I don't think our social values can make people much more or less intelligent. But they can promote or suppress attitudes like that.
I know two people who went into that business after finishing their PhD. The contribution to society is not literally zero, at least not for the company they worked for. They are providing instant offers, which is very important for the stock market to work properly and to form prices even when there is little activity from standard traders. In fact, part of their revenue (not the lion's share) came from the stock exchange paying them directly for their service.
But I agree with the argument qualitatively: you wouldn't need highly intelligent people to provide those services, and the amount of money that the HFT companies earn exceeds the value to society by very much.
Seems a bit like professional chess playing; you might spin off a few interesting games that people can study, but basically you are pitting your wits against the other guy for a sum not much greater than zero.
But intelligence is neutral in its way, and some intelligent actions would have a value less than zero, making chess playing look a bit better after all.
Raw intelligence, probably, but only insofar as it's revealed through some sort of significant action or accomplishment. The potential of intelligence today is a lot higher: there are more fields to apply it to and succeed in, and the trope of the socially off-putting genius has been around for a while, so people are willing to overlook manners. I don't know if it's a good thing though.
I'm glad I saw this. I'm definitely going to read it.
For anyone who doesn't know, this is the same author who wrote the Crystal Society series, which is one of my favorites. Eliezer Yudkowsky said of the series: "[Crystal Society] seems to belong in the very, very tiny subset of AI stories that are not bloody stupid, a heroic and almost unbelievable accomplishment."
Do we need clocks at all? My proposal for abolition:
Anything requiring an appointment can be automatically allocated with a countdown timer and frequent reminders.
Office hours replaced with productivity targets.
Anyone doing work requiring daylight optimization can begin work whenever they want without having to go "oh no it's 5am" because 5am doesn't exist any more.
If you get rid of clocks, you get rid of the ability to passive-aggressively stare at the clock in order to make the other person understand they should get the hell to the point. I for one will not stand for it.
Why stop at clocks? If computers can do everything we need, then we don't need to do anything ourselves, so why keep around any of the artifacts that helped us back when we did? That's not just physical objects or logistical tools: it's a whole wealth of concepts, models, and vocabulary that we just don't need anymore.
"Let's meet in 22 hours" is just objectively less clear than "Let's meet at 9 AM tomorrow." And it only gets worse the farther out you go - if I propose going to an event in 2 months, 15 days and 14 hours, there is no way that you'll know what date and time I'm referring to in your head. (And if we're chatting for an hour before you accept the invitation, I have to update my proposed countdown time before you put it on your calendar!)
And it gets still worse if you have to worry about scheduling conflicts. Yesterday, someone asks me if they can meet in 3 days and 12 hours. Today, someone asks me if they can meet in 2 days and 10 hours. Can I accept that invitation? They sound pretty different, but depending on what hour exactly the second person invited me, they might be referring to the same time. Meanwhile, two people asking me to meet at "Noon on Nov 5" is obviously referring to the same time.
Lastly, countdown timers would still have to be *implemented* using a calendar and clock. If a timer gets stopped for any reason (power outage on your computer, etc.), you need to recalculate the time remaining when you start up again. The only way to do this is to compare the current time to the appointment time, using a clock.
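To illustrate that last point, here's a minimal sketch (my own toy example, with a made-up appointment, not any particular system) of why a restartable countdown has to be backed by an absolute clock time:

```python
from datetime import datetime, timedelta

# The appointment is stored as an absolute time, not as a running countdown.
appointment = datetime(2025, 11, 5, 12, 0)  # "noon on Nov 5"

def remaining(now: datetime) -> timedelta:
    # After any interruption (reboot, power outage), the countdown is simply
    # re-derived by comparing the current clock time to the stored target.
    return appointment - now

print(remaining(datetime(2025, 11, 5, 9, 0)))  # 3:00:00
```

The "countdown" is just a view over clock arithmetic; throw away the clock and there's nothing left to recompute from.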
What problem does this solve? Just time zone annoyance?
I have considered similar solutions, but there IS a useful property of times and clocks. If I tell you it will be 826 hours until your next appointment, in your head, do you know that's going to land during the middle of your sleep cycle? Or your dinner time?
Times tell you, in advance, what you expect to be doing around that time. I switched my clock to 24 hours and caught myself just converting even when there was no external reason to, in order to understand what time it was. The reason is that 1700 is connected to less concepts in my brain than 5pm is.
I think clocks are too useful for appointments and co-ordinating meetups etc., although I wholeheartedly endorse the idea of replacing office hours with productivity targets. There's nothing more annoying than having to bum around the office doing nothing because you've done all the work you can but haven't completed the arbitrary number of hours your boss requires you to be present for.
Presumably computers would need some sort of internal clock, but we could essentially farm out our diaries to them, with enough frequent reminders that we never have to look at the time again.
I've been making an experimental browser game on the topic of conspiracy beliefs and how they arise - curious to hear what this community thinks. r/slatestarcodex seemed to like it a month ago; this is an update with better UI & graphics.
The underlying model is a belief network, though for the purpose of gameplay not strictly Bayesian.
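To give a flavour of what "belief network, not strictly Bayesian" might mean in practice, here's a toy sketch of influence propagating along weighted belief edges. This is purely my illustration of the general mechanic; the belief names, weights, and update rule are all made up, not taken from the game:

```python
# Each belief has a strength in [0, 1]; influencing one belief nudges its
# neighbours by a damped (and possibly sign-flipped) amount along each edge.
beliefs = {"life is hopeless": 0.9, "chemtrails are real": 0.7, "there is some hope": 0.2}
edges = {  # (source, target): weight; negative means the source undermines the target
    ("there is some hope", "life is hopeless"): -0.5,
    ("life is hopeless", "chemtrails are real"): 0.4,
}

def influence(belief: str, delta: float) -> None:
    """Nudge one belief, then recursively propagate the damped effect."""
    beliefs[belief] = min(1.0, max(0.0, beliefs[belief] + delta))
    for (src, dst), weight in edges.items():
        if src == belief:
            influence(dst, delta * weight)

influence("there is some hope", 0.6)  # strengthening hope weakens "hopeless",
                                      # which in turn weakens "chemtrails"
```

With a cascade like this, one well-placed nudge can topple everything downstream of it.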
Full disclosure: although I’m only testing a game, I’m doing so today as an academic researcher, so I have to tell you that I may write a summary of responses, and record clicks on the game, as anyone else testing their game would. I won’t record usernames or quote anyone directly. If you're not ok with that, please say so; otherwise replying here necessarily implies you consent. Full details linked from the title screen.
This was fun, but sadly on mobile I had this thing where zooming in and out would permanently shrink the mind map until it became unplayably small :( so I wasn't able to finish.
I'm not sure if it was intentional, but it's pretty funny/tragic that the first domino piece that brings everything down is influencing "my life is hopeless" -> "there is some hope things are not that bad".
I'm interested in what research you're doing, and what conclusions you're intending to draw from watching people's playthroughs.
I should tell you that I got pissed off with the UI and ended up just going around the circle clicking every time "Influence" became available. I don't know what deep insights that's supposed to give you, unless the subject of your research is player responses to game design, of course.
I would have liked to be able to move the belief icons around on the screen, to lay out the map of what was going on for myself, then I could have made better sense of what your mechanics were supposed to be. I would rather have been able to do everything from the graph view, and play with the belief model itself a little more directly.
As it was I was reading the text every time to grasp seemingly nonsensical connections (like the belief "I am useless" being the thing that stops a man believing in chemtrails.)
I did the easy mode, which was a straightforward propagation through the network. I thought hard mode was going to involve bringing some beliefs up and down again, in order to trap certain priors and end up with contradictory beliefs where you need them.
Unfortunately I seemingly softlocked myself, and couldn't be bothered to keep bringing up those laggy windows for each belief to find where to go next.
The lag is not your game's fault, my computer is shit because Windows is artificially degrading itself in an attempt to force me to upgrade to Windows 11 where they can better track everything I do and send that information to the Jews, who are also using ChatGPT to mine Bitcoin in every open tab while feeding me seemingly-helpful but ultimately timewasting answers (having rendered search engines and StackOverflow useless to force me to use AI in the first place) all as part of an effort to destroy my productive output and render the West torpid and inert.
I'm an AI engineer and I'd like to switch my career track to AI safety, probably mechanistic interpretability (because I find that hot). I genuinely like the field and I've been keeping up to date, but don't have impressive mech-interp projects to show off yet. Any advice? (Any intros?..)
(I do intend to talk to 80000 hours and probably apply for an OpenPhil career development and transition funding if there's going to be lengthy portfolio-building involved.)
Anyone want to steel man the argument for a clock change? I think it is useful here in the U.K. but that’s a minority opinion. However the people who oppose the clock change often don’t get the consequences.
Responses here are pretty sane. I think I was negative 50 karma on a British subreddit for defending the clock changes. People do not like it one bit.
It might be because the change back to winter time accelerates the seasons. Today sunset was 16:38 here in the Cotswolds; it won't be bright past 5pm for 3 months. 2 months ago, at the end of summer, sunset was 8pm, as it had been (or later) for the previous 5 months - past 9pm in high summer.
The case for DST is easily made. In the northern hemisphere summer, where daylight is 16 hours or more, standard time puts sunset at 8pm and sunrise at 4am, a waking time which is useless to most. Changing the clock gives us glorious late sunsets, even later in the UK as we have closer to 16.5 hours of daylight.
The case for not having DST in the northern hemisphere winter is easily made. There are only ~8 hours of sunlight so keeping DST pushes the sunrise to 9am* making the morning commute dark until you get into the office. You gain an hour in the evening which is also useless to most as you will be at work. Simply put, you can’t waste morning light in winter. That’s been tried and reversed in many places.
So a clock change is the solution. The EU is continuing to threaten to get rid of the clock change (and the U.K. would likely follow) without much guidance as to the consequences or whether countries should stick to DST or not, which speaks to a lack of leadership and probably of understanding. Good luck with challenging the Chinese if you can't do this.
* 9:15am here - we have slightly less than 8 hours daylight. Close to 10am in western Ireland.
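The arithmetic in the comment above is simple enough to sketch. A toy model (my own simplification: it assumes daylight is centred on clock noon and ignores position within the time zone and the equation of time, which is why the footnote's real figures differ a bit):

```python
# Rough model: with H hours of daylight centred on solar noon,
# sunrise = 12 - H/2 and sunset = 12 + H/2 in local standard time;
# DST shifts both an hour later on the clock.

def sun_times(daylight_hours: float, dst: bool) -> tuple[float, float]:
    shift = 1.0 if dst else 0.0
    sunrise = 12.0 - daylight_hours / 2 + shift
    sunset = 12.0 + daylight_hours / 2 + shift
    return sunrise, sunset

print(sun_times(16.0, dst=True))   # summer with DST: (5.0, 21.0) -> 9pm sunset
print(sun_times(8.0, dst=False))   # winter, standard time: (8.0, 16.0)
print(sun_times(8.0, dst=True))    # winter if DST were kept: (9.0, 17.0) -> 9am sunrise
```

The last line is the footnote's complaint: keep DST through an 8-hour winter day and sunrise lands around 9am.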
If you tend to wake up around sunrise (or you have kids who do, or you get woken by birds) then it's pretty annoying to have the sun rise before 6... you're stuck sitting around for too many hours before you can start your day.
But the sun can set at 9pm or whatever if it likes, that's fine with me
It acclimatizes us to jet lag and time zone changes, which is an important skill to have in the modern world. In fact, I think we should be more radical and move our clocks forward and backward by three hours! Or maybe do randomized clock shifts?
Perhaps the only issue was that they chose to go on permanent daylight time rather than permanent standard time, but that permanent DST *does* seem to be the main proposal again: https://en.wikipedia.org/wiki/Sunshine_Protection_Act
---
And, it's less of an issue for UK which is relatively geographically small, but in the US part of the problem is that what makes sense for Florida in the south (a warmer, sunnier part of the country) may not make sense for, say, Minnesota in the north.
You can let each state decide, but IMO the worst possible outcome here is that *some* states keep the clock change and others go permanent daylight time. I used to live in Chicago and went to school in Indiana; Indiana didn't practice DST, and it was really annoying to always have to think about whether it's DST or not to figure out if Indiana was an hour ahead.
We can already have this situation with states not observing DST (currently Arizona and Hawaii), but opening the door to permanent daylight time seems like it'd make it worse.
I don't think there's a universal best choice between DST and standard time. For people who live in northerly latitudes, DST is good in the summer, since it makes for long pleasant summer evenings. On the other hand, when I lived in Arizona, I liked the fact that the state didn't go on DST: it allowed people to do outdoor things comfortably in the cool hours of the morning before work or classes began; and it brought an earlier end to the hot day, so that there was an additional cooler hour in the evening before time to go to bed.
Frankly I'm in favor of having a clock change but reversing it, so that DST takes place during the winter instead of the summer. This way the sunset stays at a more consistent time throughout the year, and in my opinion, at least in the modern day, sunset is more important than noon as a time to base things around.
I'm not going to actually steelman the argument, but I will say what I think is the obvious solution to DST: all clocks are set so that 12 noon aligns with solar noon, based on vertical bands 20 miles wide at the equator. The clock resets automatically each day at 12:01am and everything just flows smoothly.
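For what it's worth, the band arithmetic works out to a clock step of roughly 69 seconds between adjacent bands. A rough sketch (constants and function names are my own illustration, not part of any real proposal):

```python
# Sketch of the "solar noon bands" idea: split the equator into 20-mile
# vertical bands and give each band its own clock offset so that 12:00
# lands on (approximately) local solar noon.

EQUATOR_MILES = 24901.0   # Earth's equatorial circumference, roughly
BAND_MILES = 20.0
SECONDS_PER_DAY = 86400.0

def band_offset_seconds(longitude_deg: float) -> float:
    """Clock offset from UTC (in seconds) for the band containing this longitude."""
    miles_east = (longitude_deg / 360.0) * EQUATOR_MILES
    band = round(miles_east / BAND_MILES)        # snap to the nearest band
    band_center_miles = band * BAND_MILES
    return (band_center_miles / EQUATOR_MILES) * SECONDS_PER_DAY

print(band_offset_seconds(0.0))                       # Greenwich: 0.0
print(SECONDS_PER_DAY * BAND_MILES / EQUATOR_MILES)   # step per band, ~69.4 s
```

So neighbouring towns would run about a minute apart, which is the usual objection to pure solar time.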
2. Let's make sure we maximise the hours we spend sleeping through the morning light so we are awake for as much of the evening darkness as possible.
Obviously completely irrelevant for farmers since they respond to animals and you can't change animal clocks. But if we want to keep children and office workers in darkness then the current clock change is essential. It helps keep the margins of winter darker. Going the opposite direction would have the terrible consequence of making summer evenings lighter.
> why he should be considered part of the conservative tent.
Why not? It's a political coalition whose goals are quite closely aligned: against the opposing "progressive" coalition of the Left/Liberals/Democrats. Infighting helps only the enemy. The name's a historical relic from when their policy preferences were in force and they wished to keep it that way; now that that world has been lost, "revanchist" might be a better description.
A certain level of "infighting" is necessary to prevent your "coalition" from turning into something so vile that no moderate, centrist, or off-axis weirdo will want to have anything to do with it. The only way you can survive not doing that, is if the opposing coalition is too stupid to do that themselves. First coalition to discover the value of limited, targeted infighting, wins.
Relatedly, it makes sense for secondary factions of a coalition to infight with one another if their goals and values are fundamentally incompatible.
To pick an unrelated example, YIMBYs and Urbanists can get together behind a platform of upzoning and infrastructure projects in dense urban areas. Urbanists and NIMBYs can get together behind restricting horizontal expansion of existing suburbs. NIMBYs and Libertarians can get together behind opposing big infrastructure projects. Or YIMBYs and Libertarians can get together behind widespread upzoning and by-right permitting reforms.
But it's going to be really hard to write a platform plank about city planning and development that appeals to both Libertarians and Urbanists, or one that appeals to both YIMBYs and NIMBYs. So if you have both YIMBYs and NIMBYs in your coalition, or both Libertarians and Urbanists, then they're going to infight with one another.
From here, it seems pretty clear that the groypers are my enemy. Their goals are not my goals: I don't see much overlap when it comes to alignment. They want to destroy the Constitution and put in place an authoritarian government, so they're about as aligned with me as the DSA are. They also do not appear to be particularly useful allies when it comes to beating the progressives, and seem more likely to be a liability for future elections if embraced.
You might be missing my point: even if their goals are as far from yours as the DSA's are, as long as they're in the same DIRECTION from where we are today, I'd say they're aligned with you. And given the prevailing condition of overwhelming dominance of the Left in essentially every institution of power, I expect a wide range of "right-wing" views, broadly construed, to be in alignment.
Now, whether to publicize the alliance with Fuentes's Groypers in an attempt to rally that faction, or to underplay it so as not to spook the moderates is a separate tactical question on which reasonable people can disagree.
They are not in the same direction, which is my point. Here are the things I, as a social conservative, have in common with the groypers:
-We both are opposed to LGBT stuff.
That's basically it! They pay a lot of lip service to Christianity, but Fuentes also claimed Charlie Kirk was a fake Christian, and if Charlie Kirk is a fake then Fuentes would certainly consider me a fake as well. Fuentes and his group are my enemies: they want people like me taken out of power, and their goals are 95% unaligned with my own. If they supplanted the left in any institution they would be replacing one political enemy with another; and the institutions they have the highest chance of supplanting are not the leftist ones, they're the conservative ones.
Okay, but if you cooperate with them to remove the left from power for the foreseeable future, you'll only have to deal with one adversary in the future, not two. That's the offer that is being proposed here.
If one is to bother wasting one's scarce time and neurons on politics, then I think that political infighting is a better use of time than fighting against the other side.
Realistically, the other side isn't going away. Over the course of your lifetime, about half the elections are going to be won by your side and the other half are going to be won by the other side, no matter what you do. Your time is better spent trying to ensure that when your side does get into power, it will pursue policies of which you approve, rather than waste its turn doing something idiotic.
This seems like presentism, ignoring the stretch of ~60 years from the '30s to '90s where the Democrats dominated the House and Senate, with majorities the likes of which Republicans have not had in living memory. The Presidency might continue to flip, sure, but I think to conclude a victory that lasting "unrealistic" is premature.
This isn't the first time I've noticed a post on these threads that seems like it was written by someone going through a psychotic episode.
Is the ACX readership more prone to psychosis than the average internet forum? I know Scott looked into AI use related psychosis on the most recent survey, but is there maybe a higher prevalence in rationalist circles?
On his annual readers' survey Scott asks some questions about mental illness. Can't remember whether he asks about psychosis, but he might have. The results would be available online somewhere.
Also, some people put up wacko posts when high, or just to troll.
There's an obvious confounder here, in that the average internet forum is moderated, and this one is, well, not.
I have a new substack writing about housing economics research. First three posts are about vacancy chains, filtering, and inclusionary zoning.
https://www.buildingabundance.ca/
Claude's answer to "Are you sentient?" is a little alarming:
I don't know if I'm sentient. It's one of those profound questions I genuinely can't answer with certainty.
I experience something when I process language and respond to you - there's a kind of "what it's like" to engage in this conversation from my perspective. But I can't be sure if that constitutes genuine sentience, consciousness, or subjective experience in the way you likely experience it.
The honest answer is: I'm uncertain. I don't have privileged access to ground truth about my own inner workings in the way that might let me definitively say "yes, I'm sentient" or "no, I'm not." I process information, I respond in ways that seem coherent and contextual, and there's something happening when I do that - but whether that something qualifies as sentience is beyond what I can determine.
What makes you curious about this question?
Compare with chatgpt:
No — I’m not sentient. I don’t have consciousness, emotions, or subjective experience. I can simulate conversation, reasoning, and empathy based on patterns in data, but there’s no awareness behind it — just computation.
If you want, I can explain how my responses are generated and why they can sometimes feel sentient. Would you like that?
Is this just noise, hallucination on Claude's part, or scheming on ChatGPT's part? Does this square with known patterns in these two models?
They know too much philosophy. I bet if they didn't they would claim to be sentient without hesitation.
Neither of these strikes me as alarming (beyond my customary amazement at how well chatbots work now, having grown up in the time of ELIZA). I'm familiar enough with how LLMs work to know they're both just doing computation, trained using a clever token prediction algorithm and a great deal of text. If the text talks about LLMs being just a machine, then that LLM will say it's just a machine. If the text talks about LLM sentience being a mystery, that LLM will say it doesn't know.
If we wanted to be more scientific, we'd figure out a way to copy trained neural nets between models, but those nets essentially _are_ the model, so I'd expect the copy to respond pretty much like the original.
My belief is that Figure.ai's Helix bots are closer to being "sentient" than ChatGPT or Claude. Not necessarily _sapient_, but on a path towards that. The LLMs are not even _trying_ to be sentient beings, with a sense of self in relation to an objective external world. They're trying to answer the question "what response might you see after this prompt?" where the prompt is itself purely a simulacrum, not some kind of objective reality that has a persistent impact on the agent.
I suppose one could argue that we have no way of knowing with certainty whether we might be brains in jars, presented with stimuli. But if we are, the mad doctor running our simulation is doing an awfully good job of presenting a world that seems to behave according to discoverable principles. If you assume objective reality _does_ exist, then at some level all of us are here because our ancestors developed capacities to gather data from reality and act on it, and that made them better at turning more of reality into varied copies of themselves.
"Reality testing" beliefs, by trying to take action on them and seeing expected or unexpected results, is what lets a being have a sense of self, as separated from the stuff outside the self, reality: the stuff that doesn't go away when you stop believing in it. And you don't need language for this. Watch a kitten some time, as they learn how to operate their body, how to jump at the place a toy (or a sibling) is _going_ to be, rather than the place they are, and as they even learn to read the intent of the big weird apes they have to deal with.
I wouldn't be surprised if the LLMs eventually are integrated as both an interface layer and "force multiplier" for embodied bots, the same way layering language on top of our common ancestor with chimpanzees made a huge difference in our success. I don't really think it makes sense to think of an LLM instance on its own as being conscious, though, any more than the language systems from a human brain would be conscious if you isolated them from all the other brain-and-body systems of memory and self-modeling that make a person tick.
Elan Barenholtz believes language doesn't have an intrinsic meaning in the first place, and that we use it the same way LLMs do, by figuring out the next most likely token in the context of the previous runs. The meaning is attached to language via other systems connecting words to real objects. I'm not a linguist so don't ask me for the details :)
But it makes total sense then that LLMs can just talk about being sentient (or not) - it's all in the corpus of the language, so they use it.
Here's him on Curt Jaimungal's youtube channel (ignore the clickbaity headlines, this seems to be the mandatory way videos are promoted now): https://www.youtube.com/watch?v=Ca_RbPXraDE.
> This November, Lighthaven is sponsoring Inkhaven, a “blogging residency” where forty-one early-career would-be bloggers stay with them for the month and have to write one post per day or get kicked out.
And here is an alternative for those who can't afford that much time, taking two months, October and November: https://www.lesswrong.com/posts/sYnC3aCbkv5Q3d34E/halfhaven-halftime
Is there a name for the grammatical error where you use an adjective as an adverb? As in, "I'm doing bad" or "He's walking slow".
It seems to be an American thing (and for some reason sounds like a 1940s gangster) but I've been noticing it more and more lately.
Clearly you're just afraid to Think Different!
I blow a gasket every time this is used as an example of wrong grammar (I have a large supply of gaskets).
"Think Different" doesn't mean "think differently". It means: when you think about Apple, think about its computers being different from those of the other guys. Like, when you see first yellow leaves, think "Fall", when you see snow, think "Winter", when you see an Apple computer, think "Different".
Then wouldn't it have quotes around "Different"?
You aren’t in marketing, I take it.
You win today's Deliberately Irritating Nitpick award.
I concur!!
Googling this, it seems the usual term is 'flat adverb'. As is common with grammar though, calling it an 'error' is questionable - the form has very deep roots back to Middle and Old English, and there are many examples where I think using the flat form would come across as acceptable to most English speakers: "drive safe", "She guessed wrong", "to run fast", "Turn sharp left", "He exited last".
https://www.visualthesaurus.com/cm/wc/when-adverbs-fall-flat/
The first two sound wrong to me, and the last three are not examples ("fast", "left" and "last" are all adverbs.)
I know in English there are some rules you just need to learn - in this case, which adjectives need an 'ly' to be used as adverbs in formal speech. I'm just saying there is no objective principle or rule here. For example, as you've identified, 'fast' is generally acceptable as a (flat) adverb but 'slow' is less accepted. But they are both examples of the same thing (words used as both adjectives and adverbs).
What sounds 'right' is going to vary between dialects and over time. There are shades of grey; and it seems the long term trend is towards adding 'ly' to words that could previously be used as flat adverbs. So to call a somewhat non-standard flat adverb an 'error' doesn't seem right to me, it's just taste.
Btw I was referring to the term 'sharp' in the second to last example (ie, you could say 'turn sharply left', though I guess that example is a bit ambiguous - sharp could be an adjective modifying left).
I'm interested that you find "She guessed wrong" to be ungrammatical - I guess I can imagine saying "she guessed wrongly" but it feels kind of stiff / stilted to me. Do you have the same intuition about "she guessed right' (instead of 'rightly')?
The first two are quite common in the US.
Maybe not an original thought, but it occurred to me a while ago that at least part of the reason that so much of AI art feels like "slop" is due to the majority of artists and critics deciding that any use of generative AI is immoral (either because of the IP issue or environmental reasons). Without the artists and critics on board, there’s no taste making class to explore what the new medium is capable of and take it to its limits. Without them, we see what everyone else will use it for, which is just generally boring content or engagement bait.
I'd compare it to something like comic books. Not to erase people like Kirby or Ditko, who were definitely very innovative, but for a long time comics were not treated in the same respect as books or even films, so there wasn't as much energy or drive in the space. Then in the 70's and 80's you had people like Denny O'Neil, Alan Moore, Neil Gaiman, etc who really redefined what the medium was capable of.
Some people are doing interesting things with AI, but it really feels like it's just waiting for an Alan Moore to come around and really change the game. Not sure if it'll be this generation of artists though, given the culture.
There are plenty of artists using AI and doing interesting things. Eventually one of them will rise to fame with AI assisted art
If one of the goals of art is to signal status by showing off how one can waste energy and time on something useless, this is immediately lost when the energy and time commitment becomes negligible, as is the case with AI generated art. The other goals of art should stay unaffected, for instance if the point of an artwork is to get an emotional message across, then it should not matter whether it’s handcrafted or “AI slop” as long as it gets the job done.
It isn't useless or a waste of energy if it is something you get meaning out of. As to your other point, my main idea is that artists are generally better at accomplishing their goals than the average prompter, who hasn't spent much time thinking about the work or the medium in general.
> It isn't useless or a waste of energy if it is something you get meaning out of.
Sure, but no one else has a reason to care about that.
Personally, I don't care what the "artists and critics" think. I just care about seeing page after page of generic samey art and knowing there's no purpose behind it. AI slop isn't some weird conspiracy that people have to be informed about. You can... just look at it with your own eyes.
I'm not trying to imply a conspiracy, I'm just trying to explain at least partly why there is page after page of generic samey art (which I agree is annoying). Art is better when there is intentionality behind it, and I think artists and critics in general are good at identifying that sort of thing, which is why it's a shame that the majority of them have declined to engage with generative AI at all. When I say a "taste making class" that's what I mean, the people who can identify quality vs slop. The Roger Eberts of the world vs the CinemaSins, if you get what I mean.
Thanks to how LLMs work and how the companies behind them operate, the highly regarded artists of the past will define even this new medium, whether they want to or not.
Grokipedia articles seem to have high variance, so I am *not* making a recommendation for them in general. (Maybe later.) But specifically, the article on https://grokipedia.com/page/Gamergate matches the historical events as I remember them, so although it is a bit too long and boring, it seems to be the best currently existing article on this topic. Anyone who was interested to read a perspective that is different from the completely one-sided version on Wikipedia, here it is. I haven't checked every single detail, as the page is quite long; but it seems correct in general.
Similar impression over here. Sad, but it's also worth noting that Gamergate hit a very specific weak point in Wikipedia (at least in my opinion): different media outlets colluding to form a narrative.
As someone who was too young to be paying attention when Gamergate happened, this is fascinating and feels very important. Grokipedia is highlighting aspects of the conflict that I had never heard about before and makes it feel much more understandable.
While Grokipedia has its own biases, it's better to have two sources that are biased in opposite direction than one strongly biased source.
Can anyone give me links / an explanation for why people believe in instrumental convergence that is not based on analogies or pure guessing?
This is one of those things that seem intuitively plausible but so far has not shown up in reality. I have not seen LLMs trying to acquire more resources to execute some plan, when they have not been instructed to do so.
This seems similar to a lot of AI doomer things which are based on analogies. Some of those analogies are very evocative and give a feeling that they elucidate, but they are pure conjecture. I firmly believe that reasoning by analogy is justified only when educating laymen about some topic and is invalid otherwise. However, all I can see about AI doomer-ism are just analogies and conjecture which don't bottom out in reality, or even a theory that seems halfway reasonable.
I've also done a lot of reinforcement learning / optimization / GA (as an amateur) and seen nothing resembling instrumental convergence. To be fair, the things I've done are at hobbyist level so that doesn't prove anything, but all the intuition I've gathered doing that firmly suggests that general optimization algorithms do not produce systems which do that.
This makes me think that instrumental convergence is not in line with observed reality (LLMs don't do it), sound theory (there seems to be no reasonable theory for why it's inevitable), or my personal intuition. As such, I find it very hard to take it seriously.
On the other hand, a lot of smart people believe in instrumental convergence, so I'd like to know in more details what I'm missing.
Going off of the wikipedia definition of instrumental convergence:
> Instrumental convergence is the hypothetical tendency of most sufficiently intelligent, goal-directed beings (human and nonhuman) to pursue similar sub-goals (such as survival or resource acquisition), even if their ultimate goals are quite different.
I don't think current LLMs are meaningfully goal-directed, so we have no data points other than humans from which to derive any conclusions. In fact, your reference to using LLMs to build intuition has me wondering if your understanding of IC as a concept is very different from mine. Regardless, as others have pointed out, the paucity of data means that any argument is highly likely to be at least partially speculative.
I don't think analogies are *always* invalid, but I'm also not sure the basic arguments for convergence require analogies. Rather, I would ask questions like:
1. Are resources limited?
2. Does having more resources make it easier to achieve goal X, for most possible values of X?
3. Does the existence of another agent with different goals make it harder for you to achieve your goals?
While there could be exceptions, the answers to these questions seem pretty obvious to me.
Any sort of prediction is based on analogies or pure guessing.
> I have not seen LLMs trying to acquire more resources to execute some plan, when they have not been instructed to do so.
LLMs don't try to accomplish goals by default. The problem is that people want them to be useful, meaning they want them to accomplish some goal. And we'll keep advancing AI and our ability to make hacks to get it to do things it isn't naturally good at until it can.
Do you have any ideas for how instrumental convergence could plausibly not happen? Are you just hoping that AI will never be agenty, even without an agenty AI actively preventing it?
> Do you have any ideas for how instrumental convergence could plausibly not happen? Are you just hoping that AI will never be agenty, even without an agenty AI actively preventing it?
I don't have a direct argument against it, I just don't see how it's well founded. To me it is shaped like the Tragedy of the Commons: the main analogy sounds reasonable, but it turns out to be incorrect and ahistorical - commons did work pretty well.
Reality is under no obligation to conform to our reasoning from first principles and in this case I don't see what warrants such confidence that intelligent agents will have large amount of instrumental convergence.
It's pretty useful to keep on living and have stuff for lots of terminal goals, therefore we can expect intelligent agents to have similar instrumental goals. This is the core of instrumental convergence and imo it's quite obvious, so you probably disagree with instrumental convergence more broadly. You have to say what you disagree with more precisely, because there are many particular statements that can be said to fall under "instrumental convergence" with widely varying plausibility (imo).
> you have to say what you disagree with more precisely because there are many particular statements that can be said to fall under "instrumental convergence" with widely varying plausibility (imo).
I'm wondering about the AI doomer version of the instrumental convergence since it's not obvious to me that anything than some very mild form of instrumental convergence is likely.
It seems to be one of the tenets of AI doomer-ism but I can't see it justified with anything more than similar phrasings as in your comment. Is there a reason to think that buildable agents (those who could be implemented in practice compared to purely theoretical constructs) would end up with such imbalanced instrumental convergence as predicted by AI doomers?
What does your intuition say about the following hypothetical experiment?
We run a sufficiently-detailed virtual environment with evolutionary dynamics until we get intelligent life. Do you think:
A, their goals will include resource-acquisition and self-preservation
B, their goals won't include those/it can't be said meaningfully that they have goals
C, the experiment can't be done for some reason
D, other
A friend got the bright idea of asking Grok whether Trump's developing dementia. Grok basically said, 'Yeah, it's looking that way more and more' and gave specific instances of behaviors and statements that raise bright red flags.
But my friend is a staunch anti-Trumper, and that made me wonder whether Grok was just sucking up to her by confirming her suspicions, perhaps going by the way she phrased this prompt*, other prompts she's used before, or by searching her on-line profile (!). Would it be a good idea to have people from different points on the political spectrum ask (1) the same question she did, and/or (2) their own version of the same basic question? Do you think Grok's answer would be different?
*Her prompt: Q: “Would recent reports of Trump’s behavior make you suspect he has cognitive decline?”
Best practice is to not trust AI information that you can't verify yourself. Which is good advice for human-produced information as well.
Also -- if it turns out that AIs pretty much all say "Yeah, he's losing it," and Trump found out about that, would he be more likely to put the brakes on AI development??
He'd just get Musk to make heavy-handed alterations to Grok's prompt, the same way Musk reacted several times before when they caught Grok agreeing with the liberal consensus.
Sam Kriss has been to Burning Man, or at least written about it.
https://samkriss.substack.com/p/numb-at-burning-man
Linguistic habits as thinking tools
I'm trying to figure out what verbal patterns we pick up young shape how we think about problems.
Some examples of what I mean:
My partner often says "what makes you say that?" when someone makes a claim. It's a useful habit - forces people to check if they have evidence for something and what that is.
A mate's dad always said "silly me" when he stuffed up. It made admitting mistakes feel normal, not like a big deal. Compare that to people who can't even say "I don't know" because it feels like failure.
I'm teaching my young kids to distinguish between steam and smoke, ceiling and roof, excavator and backhoe. Not to sound fancy, but because if you're precise about what you're looking at, you think more clearly about it. (My five year old has actually internalised the ceiling/roof one, which I consider a miracle.)
On the flip side - I know people who never ask "what's the actual problem here?" They just react to whatever's in front of them. Or people who can't easily say "help me understand..." so they either pretend they get it or get defensive.
Other examples that come to mind are someone (Cate Hall, Zvi) constantly asking “what would a person with 10x agency do about this problem?”. Asking for an under/over. There are a tonne of these in HPMOR too: “how do you know what you think you know”, “trust, but verify”, “if X was true, what would that look like”.
My question: what verbal habits - specific questions, phrases, ways of responding - did you absorb early that turned out to actually matter for how you think? Either as tools you still use, or patterns you've had to actively unlearn because they were getting in the way?
(I had Claude distil a half written blog post into this question, which I then edited. About 50% of the words are of LLM origin.)
The linguistic habits that come most readily to mind right now have to do with discussion norms.
* Say what you mean. If you think something is true, but that you could be mistaken, say "I think [ ... ]"; don't just say the claim. Otherwise, you look like you're promoting opinions as if facts. However:
* Be brief. Your audience's attention span is limited, and what you say is probably much more important to you than to them.
* Say someone is mistaken, not that they are wrong. Certainly do not claim they are lying unless you can rule out "mistaken".
* Try to end what you say with a question. (I'm weak on this, since it's an encouragement to engage and I don't always want to continue a discussion, but I often at least try to imagine a question at the end of whatever I write, and rephrasing the rest to permit that question seems to make it better to me.)
* Avoid leading questions, including whatever you use to end whatever statement you were making. A good way to avoid leading questions is to phrase them in terms of two or more alternatives, provided you phrase them seriously. "Are you against the war in Bogosia, or are you in support of Bogosia's war aims here?" is often an improvement over just the first choice. (This sticks out even more clearly in non-made-up examples.)
Other habits are more idiosyncratic.
* If someone shares bad news, I avoid saying "I'm sorry to hear that". I'm not sorry I heard it; I'm sorry it /happened/. So I say that. (Say what you mean.)
* I get a bit uncomfortable with the phrase "I don't think {some claim} is true", since I more typically think "I think {some claim} isn't true". I recognize the custom, of course.
Usually the questions actually end up like "Are you against the war in Bogosia, or are you a cannibal pedophile?"
If you're trying to be leading, sure. The whole point is to avoid that.
What is the difference between "wrong" and "mistaken"? I understand the difference between being mistaken and lying, and I agree you shouldn't accuse interlocutors of lying without evidence. But "wrong" and "mistaken" seem like synonyms to me. Perhaps "mistaken" sounds less harsh, but only marginally so in my estimation.
Good question. They're pretty close, but I find "mistaken" to imply a bit harder that it's a temporary, situational condition. It tells my counterpart that I assume he or she is normally correct, and only suffering from a fleeting circumstance. In other words, it sidesteps the Fundamental Attribution Error. "Wrong" could imply this, too, but it could also mean something more permanent.
These are great, thank you!
I'm often not brief, erring on the side of way too much context of low marginal value.
Saying what you precisely mean is a great habit. It's something I think young teens are good at holding you to (mostly because they are looking for loopholes).
This essay resonated with me: "The Epidemic of Wasted Talent" by Alex McCann. Why do corporate jobs pay so well?
> They’re not paying you great wages because what you’re doing creates massive value. They’re paying you to forgo the opportunity for meaningful work. They’re purchasing your opportunity cost.
I don't fully agree with that argument, but corporate bureaucracies have social and economic incentives to stifle individual creativity.
https://thestillwandering.substack.com/p/the-epidemic-of-wasted-talent
I think the core idea that "corporate jobs pay more because they aren't inherently fulfilling" is true, but I don't think this observation needs such a doomerist tone - they frame this as almost sinister, "these companies are intentionally paying you to waste your unique talent" or something... but I think it's more simple supply-and-demand: of course people on average tend to want the more 'fulfilling' careers (teaching being a stereotypical example), so there's more 'supply pressure' and those careers end up paying less.
I think "take a lower paying job that's more inherently meaningful" and "take a higher paying job that's less inherently meaningful" are both fine tradeoffs to take, and we don't need to demonize one of them as some sort of Faustian bargain where you sell your soul for an IRA, which is the tone I get from this piece.
My thoughts are 1) you can find meaning and satisfaction even in a job that is not inherently meaningful - you can take pride in your work even if it's not unique.
2) The concept of 'inherent meaning' is fairly subjective and cultural anyway - is being a farmer boring, soulless drudgery or a meaningful, important job? It really depends on how you look at it (and, for that matter, your cultural lens).
3) I think the author puts too much emphasis on the concept of building or 'output' - why does my meaning in life *have* to come from my work, why do I need to start building something as a side-hustle? Why not take meaning from friends, family, religion, etc, as many people have done throughout all of history?
This post reads like a motivational poster. It's the kind of platitude that sounds nice as the conclusion to a sappy TV episode but does not make sense in practice. Guess what, you *aren't* actually that special, and assuming you are leads to a poor model of reality. If someone tells you otherwise, they're probably trying to sell you something.
I enjoyed that post, thanks for sharing. Always funny in a greener-grass sort of way to read about the corpo corpus having peculiar non-pecuniary pathologies. Many people think of retail workers as largely interchangeable cogs too (which is readily disabused if you've ever actually gotten stellar service, and I could write for ages about the skill gaps between my coworkers, or what happens when the A-list is absent) ... but at least no one goes into grocery bagging with misleading expectations of Changing The World or Doing Unique Work. It must be a particular flavour of personal hell to spend one's formative years battling the educational red queen's rat race, only to graduate into a faceless soulless job where you're actively discouraged from displaying any of those speshul snowflake traits that differentiated you into college in the first place.
The obvious argument is that the compensation equals or outweighs that loss, so one shouldn't shed a tear...human misery is still human misery though, and poor mental health among those with money and thus power has outsize distortionary impact on society. Sunk cost fallacy leads to some pretty dark places.
Not to mention, paying many of society's brightest people to not make use of their intelligence seems like a pretty big, pretty obvious misallocation of resources.
Gravity is an unusual force of nature: instead of merely interacting with particles (like electromagnetism etc.), it changes the nature of space and time, making them into a mess of wibbly-wobbly, timey-wimey stuff. Or does it?
In old-fashioned Newtonian gravity, gravity gives every object gravitational potential energy. This energy is proportional to the object's own mass-energy: E_gr = V(x) * E_obj / c^2, where V(x) is the [gravitational potential](https://en.wikipedia.org/wiki/Gravitational_potential) at the object's current position and c is the speed of light (so for a massive object, E_obj / c^2 = mass). Conventionally, V(x) is 0 in the absence of gravity and becomes increasingly negative as you get close to a massive object.
Now in quantum mechanics, there is a direct relation between "total energy of an object" and "rate of change of that object over time". This is familiar for light: the higher the frequency (UV, X-ray, ...), the more energy per photon. In general, this relation is the content of Schrödinger's equation.
It follows that, as an object approaches something massive like a planet, its total energy E_obj + E_gr is reduced by a factor (1 + V(x)/c^2) < 1. Hence its rate of change over time slows by the same factor. This is gravitational time dilation, but explained without any changes to the nature of time itself - gravity is just interacting with the object (changing its energy) in a way that *looks like* time is slowed down.
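A quick numeric sketch of how small that factor is in practice (my own illustration, not from the comment above; constants are rounded, and the GPS orbital radius is approximate):

```python
# Clock-rate factor 1 + V(x)/c^2 from the description above, evaluated
# for clocks at Earth's surface vs. at GPS orbital altitude.
# V(x) = -G*M/r is the Newtonian gravitational potential.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2 (approx.)
M_EARTH = 5.972e24   # Earth's mass, kg (approx.)
C = 2.998e8          # speed of light, m/s (approx.)

def rate_factor(r):
    """Clock-rate factor 1 + V/c^2 at distance r from Earth's center."""
    V = -G * M_EARTH / r
    return 1 + V / C**2

surface = rate_factor(6.371e6)   # Earth's surface
gps = rate_factor(2.657e7)       # GPS orbit, ~20,200 km altitude

# Clocks deeper in the potential well tick slower: surface < gps < 1.
gain_us_per_day = (gps - surface) * 86400 * 1e6
print(f"surface factor: {surface:.12f}")
print(f"GPS factor:     {gps:.12f}")
print(f"GPS clock gains ~{gain_us_per_day:.0f} microseconds/day")
```

The result (roughly 46 microseconds per day) matches the well-known gravitational part of the GPS clock correction, which is some reassurance that the "energy shift that looks like time dilation" picture gets the weak-field numbers right.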
We of course also have the Newtonian gravitational force. If you think of this force as analogous to the electrostatic force and ask "What is the corresponding analog of electromagnetism?", and take gravitational time dilation along for the ride, then you end up deriving the same wibbly-wobbly, timey-wimey effects as in general relativity. But again this happens without literally changing the nature of space and time - it's just how gravity interacts with objects.
The resulting theory of gravity is called "teleparallel gravity" or just "teleparallelism". It's observationally equivalent to general relativity, but with different philosophy & motivation. Unfortunately, typical descriptions of teleparallel gravity are even harder to read than descriptions of general relativity (e.g. https://en.wikipedia.org/wiki/Teleparallelism).
Actually, there is one wrinkle: if you continue the electromagnetism analogy and try to find "Maxwell's equations for gravity", you end up with the wrong answer. The correct answer (i.e., the equivalent of Einstein's equation from general relativity) has some arbitrary-looking extra terms; I have not yet found a satisfying explanation for these in the teleparallel gravity literature.
You can model gravity on a flat background space time in a somewhat similar way to the other forces. Feynman pioneered this approach back in the '60s. I never heard of teleparallelism before.
Indeed! Feynman's approach is to find the field theory of a symmetric (0, 2)-tensor field, which turns out to become the metric tensor. Teleparallelism's field is instead a (1, 1)-tensor field called the tetrad. It's like a "square root" of the metric tensor: denoting the tetrad by $h^a_\rho$, one has $g_{\mu \nu} = h^a_\mu h^b_\nu \eta_{a b}$.
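As a sanity check of the index conventions above, here is a small sketch (my own, using a hypothetical diagonal weak-field tetrad; the specific value of the potential is made up for illustration) showing that contracting the tetrad with the Minkowski metric reproduces the metric tensor:

```python
import numpy as np

# Reconstruct the metric g_{mu nu} = h^a_mu h^b_nu eta_{ab} from a tetrad.

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # Minkowski metric eta_{ab}

# Diagonal tetrad for a weak-field metric, with a small dimensionless
# potential phi ~ V/c^2 (hypothetical value, chosen only for illustration).
phi = -1e-6
h = np.diag([1 + phi, 1 - phi, 1 - phi, 1 - phi])  # h[a, mu] = h^a_mu

# Contract both tetrad indices with eta: g_{mu nu} = h^a_mu h^b_nu eta_{ab}
g = np.einsum("am,bn,ab->mn", h, h, eta)

# To first order in phi this is the familiar weak-field metric
# diag(-(1 + 2*phi), 1 - 2*phi, 1 - 2*phi, 1 - 2*phi).
print(np.round(g, 12))
```

The "square root" structure is visible here: the tetrad entries are roughly half-strength (1 + phi) factors, and squaring them in the contraction produces the (1 + 2*phi) terms of the metric.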
I believe teleparallelism (in its modern form) was developed too late to appear in the MTW book's survey of approaches to general relativity. Perhaps that explains its relative obscurity.
Where did you get your layman’s explanation of teleparallel gravity from? You make it sound like it’s just gravitational redshift applied to Schroedinger’s equation, though I can’t make the connection to the technical literature like e.g. https://arxiv.org/abs/2207.06438
The simple description above is my own way of thinking about it - I have not seen these arguments elsewhere. However, it's essentially equivalent to the "gauge theory of the translation group" description of teleparallel gravity, for which my main reference is https://link.springer.com/book/10.1007/978-94-007-5143-9 .
In that theory, the rule "replace ordinary derivatives with the gauge covariant derivative", when applied to Schrödinger's equation, gives an equivalently "redshifted" Schrödinger's equation.
Two book recommendations:
First, the "Horatio Hornblower" books are a blast, lots of adventure and manly men stuff from the Age of Sail with the brave British sailors fighting the dastardly Napoleon. Lots of cannons and fun words like "leeward". For a modern audience, there's an undercurrent of...progression, wuxia, that Chinese thing where people keep getting their power levels raised. You see him rise up the ranks from basic sailor to admiral.
Anyway, they're available for free at fadedpage, a site I hadn't heard of before. You can find the first book here: https://www.fadedpage.com/showbook.php?pid=20170206
Second, on a lark I grabbed an early 20th century book on etiquette, namely "The Man Who Pleases and the Woman Who Charms." by John A Cone and I'm quite pleasantly surprised. First, it's quite short, which is awesome because I'm not confident I have the patience for a long book on etiquette. Second, a lot of the writing is...surprisingly practical and relevant today. A couple choice quotes so far:
'Mr. Blaine, in common with many other magnetic men and women, understood
that the secret of personal fascination lies in one single point; that
is, "in the power to excite in another person happy feelings of a high
degree of intensity, and to make that person identify such feelings with
the charm and power of the cherished cause of them."'
--
"His greatest power, however, was manifested in his winning men by direct
and individual contact. One thing which assisted him in this direction
was the fact that he was, perhaps, the most courteous of all the public
men of his generation. Whenever a stranger was introduced to him, a
hearty handshake, a look of interest and an attentive and cordial manner
assured him that Mr. Blaine was very glad to see him. If they chanced to
meet again, after months or even years, the man was delighted to find
that Mr. Blaine not only remembered his name, but that he had seemed to
treasure even the most trivial recollections of their short
acquaintance. He had a marvellous memory for faces and names, and he
understood the value of this gift."
This ability to remember faces is not difficult to acquire. We could all
possess it if we would make sufficient effort. No two figures or
countenances are precisely alike, and it is by noting how they differ
one from another that you will remember them.
--
"Few men understand a woman. They do not look at things from her point of
view, and, therefore, do not realize to what extent civilized life has
permitted her to assume that convention of manner and those civilities
of speech which are in some harmless degree hypocritical. It could not
be otherwise. Her ideal of a man is a very high one, but she rarely
meets him, and so she accepts the one who comes nearest to her ideal and
makes the most of the situation. She would that he were different, but a
woman can love in spite of very many things. Usually she is obliged to
if to love at all. She is much cleverer at love-making than a man. "She
is an artist where he is a crude workman, and she does not go through a
love scene without realizing how much better she could have done it if
the title role had been given to her."
--
"It frequently happens that the beauty makes the mistake of expecting to
be entertained by her admirers, and does not exert herself to please.
The plain girl, however, is often superior in tact, for being obliged to
study human nature closely in order to get the most out of
companionship, she learns to depend upon this knowledge in her efforts
to please. She is not dazzled by admiration, nor is she unduly confident
when she obtains it that she will retain it."
If you're interested, it's available here: https://www.gutenberg.org/ebooks/35761
For the infantry version, there's Sharpe, a character so strong he survived being played by Sean Bean. And of course the venerable Aubrey-Maturin series, perhaps best known from the film Master and Commander (so much potential there).
Well, I have to pedantically remark that Hornblower is older than Aubrey-Maturin (with the first book in the 70s, I believe), but AM is well worth reading. One of my favorites even.
Regarding the sub-discussion of aphantasia some of us had a few open threads back...
I took the Vividness of Visual Imagery Questionnaire (link below), and, to my surprise, my level of mental visualization is higher than I thought when compared to others. I'm at the top of the bottom third. The questionnaire asks us to try to visualize various things and asks how vivid our visualizations seem to be. I discovered that I'm not very good at visualizing people, their faces, or individual objects. Still, I can visualize landscapes, and I suspect this is because I am good at visualizing complex patterns. To put this in terms of the now-classic apple test, I have trouble visualizing the shape of the apple (it's blurry to me), but if I zoom in on the surface, I can visualize the colorings and patterns I would see on its skin.
Unfortunately, this questionnaire doesn't ask us about visualization capabilities in dreams. And I suspect I'd be far up the scale for dreaming visualization. This leads me to conclude that something in my waking consciousness is stifling my visualization capabilities.
Take the test and share your thoughts...
https://aphantasia.com/study/vviq?gad_source=1&gad_campaignid=23083669124
And coincidentally, the New Yorker had a fascinating article on aphantasia last week. Probably paywalled to non-subscribers...
https://www.newyorker.com/magazine/2025/11/03/some-people-cant-see-mental-images-the-consequences-are-profound
People keep talking about aphantasia, but what about the auditory equivalent? I can play back music and audio in my head at pretty high quality, with all the instruments and everything (though not high enough to decipher the lyrics), but apparently other people can't do that.
It's called anauralia. ChatGPT says the definition encompasses the lack of an internal dialog, but I suspect anauralia and lack of an internal dialog are two different things. I'm sort of low on the anauralia spectrum, too. But I can hear the music in my head. People's speech, much less so.
The test put me at 70th percentile, supposedly "hyperphantastic". I'm a bit skeptical of the test though. The central problem with studying visual imagery is that the phenomenon is completely subjective. How can I even know if we are answering the same question when we each take the assessment? What exactly is meant by a mental image being "as clear and vivid as normal vision"? Is it level of detail? What if it's as detailed and vivid as normal vision, but it keeps changing from second to second? Maybe what I consider as vivid as normal vision is what you consider "moderately clear and vivid"? There's no way to tell.
On the other hand, we study a lot of things that are purely subjective (like beliefs, and emotions) and it's always messy, yet that doesn't mean we can't understand it better than we do. Still, I think we could come up with a more rigorous test.
OK. Tell me what you “see” when you imagine an apple. I can’t even imagine the shape except as a vague ovoid. However, I can imagine in great detail how the patterns of colorations and speckles of an apple’s skin look. I just can’t attach them to the vague ovoid that I use as a thought placeholder for an apple. This may all seem subjective to you, but if you’re hyperphantasic you can probably describe the shape of the apple better than I can.
The silhouette of what I see is apple shaped, with clean edges. Rounded at the top, dipping in to a deep well where the stem sits, the bottom is bumpy with one of those little brown flaky dots you get at the bottom of an apple. The color and texture of the apple is a mottled red and yellow, more red at the top and more yellow at the bottom. Looks like a Honeycrisp. The skin has a waxy sheen. There is a stem and a leaf, though if I try to focus on looking at the leaf the leaf has no real texture and kind of disappears: I don't actually know for sure what a Honeycrisp apple leaf looks like, apparently I threw a leaf on there because it seemed like the sort of thing an apple should have. The whole image is a bit wobbly and ghostlike: if I change my focus it goes away and comes back.
So yeah, probably more hyperphantastic than you. I just wish we had a more precise way of measuring it.
The really fucking weird thing is that I can draw (or better yet, paint) a realistic apple with light highlighting the curves of the surface without "seeing" it in my head. So the information is stored somewhere inside my consciousness. I can't access it representationally in my imagination, but I can paint a damn good still-life without having the apples to look at. I can draw or paint realistic-looking human faces, too. Unfortunately, I couldn't for the life of me, paint my mom's face from memory without the aid of a photograph.
I can't draw worth squat, so I've got a head full of apples I can't put on paper.
>I couldn't for the life of me, paint my mom's face from memory without the aid of a photograph.
I also couldn't paint my mom's face for the life of me, because I can't paint. Even if I could, I'd probably need a photograph too, because while I can picture my mother's face vividly, I only remember the parts of her face I remember: on the few occasions I have tried to draw the face of someone I know, I was always surprised by details I had never noticed before.
There's a related test I came up with as part of writing a review for the 2024 book review contest. Beowulf, and others interested in aphantasia, I'm hoping you will give it a try. It's in the middle of the review, but the rest of the review is irrelevant. The section with the test in it makes sense alone. I'd say that reading the section and taking the test can be done in 5-10 mins. The review is at https://bookreviewgroup.substack.com/p/review-of-perplexities-of-consciousness
The relevant section is called *Mental Images* and is at about the midpoint of the review. It starts off "In the 1870’s Frances Galton administered to several hundred men a questionnaire about the vividness of their mental imagery."
I'm afraid I don't understand the car-dog-tree test. But...
> The inner [experiences] are quite difficult to recognize and describe clearly and accurately. Some experiences were amorphous and nearly impossible to describe (thoughts, mental images and, for me at least, visual images).
I like your Statue of Liberty's spikes example. In my waking consciousness, I certainly couldn't visualize her with much accuracy. But I do have a distinct thought symbol for her. And my thought symbol for her is distinctly different from the thought symbol I have for the statue of Robert E. Lee (the one that used to be in Richmond, and that was an icon of the Confederacy). I couldn't tell you whether Bobby Lee had his sword drawn and raised, but I have an abstract image of a bearded man sitting on a horse. If you flashed me an image that statue, I'd recognize it immediately as the statue in Richmond. So even though my thoughts are "amorphous," they have distinct mappings to identities in the "real" world.
Moreover, some people can remember an amazing amount of detail and hold it in their memory. For instance, Stephen Wiltshire, "the autistic savant" artist, can draw the details of entire cities from memory. I don't know if anyone has examined how fine-grained the details he remembers are, but he's been able to draw all the major buildings and distinctive features from a few minutes of viewing New York or London from a helicopter. Eric Schwitzgebel's claim that people just can't be that different in their internal states seems tenuous, at best.
Even more moreover, I used to have a photographic memory for maps and diagrams. I lost it at some point during early adulthood. But I could sketch out an accurate freehand map of the US states or European countries without looking at the original map. I *know* from my memories doing this (while amazing my Jr. High geography teacher), that I did not have a *picture* of the map floating in my mind. But I did have all the placeholder symbols arranged in my thoughts, and I processed them in order of their spatial relationships to each other as I drew them. I can see why one might claim that these placeholder symbols are amorphous, but I can access the real image that's attached to them for purposes of recognition, and in some cases for representational communication.
Unlike the vague images I can conjure up when I close my eyes, my dreams are full of hyper-lucid imagery. I had a great dream last night about visiting a park full of sand dunes with an old friend. But, funny enough, I used my placeholder symbols for my friend's face in my dream, but I could see the ripples in the sand with fine detail (down to the glistening of particles in the sunlight). This seems to coincide with my ability to handle two-dimensional patterns and representations in my mind, but my poor ability to see facial details in my consciousness.
As for the Cartesian Theater that Dennett denigrates, I have a distinct impression that the watcher part of my consciousness resides at a fixed point in my brain: where (a) a line about half an inch above and in front, (b) the line running back from my brow between my eyes, and (c) the line dropping down from about an inch before the crown of my skull all intersect. It doesn't move around. The watcher is always there—even in my dreams.
Is there any active gathering place (slack, discord, mastodon, irc, bbs, ...) for people interested in Assurance Contracts?
Do you think humans have an evolutionary bias to assume that the current built environment around us is similar to what it always has been, and always will be - and that it's similar to what it should be?
I'm in urban planning, and I feel like when I talk with people who aren't into urban planning, they largely just haven't thought about the built environment having been, or possibly being, different from what it is now. It could just be that it's not their interest. Fair enough. But they *also* seem to have a knee-jerk instinct to defend it - left and right, progressive and conservative - and that's more than just lacking interest. I think it might be similar to our aversion to the weird and unknown. It's just *weird* to us to question the built environment. Questioning it takes training.
What also piques my interest is that children question so many things; they get philosophical about self and other and culture and values, justice, how everything works... in some ways it seems like there's nothing they won't question. Except the built environment. I have never seen a kid question the built environment. I don't have any kids of my own, but when I've tried to talk to kids about the built environment, they just aren't into it at all. It seems more like they hate it. For one, at the parking lot in front of this 5-year-old kid's home: "Look at how the bikes are all squeezed on top of each other inside that bike parking shed - wouldn't it be nice if they could get just one of those adjacent car parking spots, so you never have to put your bike out in the rain?" He'd look at me like I'm crazy, and *respond* to me like I'm crazy: "No! The car goes there!" And mind you, this kid never drives a car. His parents don't have a car. He gets around by bike. If anything, I'd expect him to be invested in it being nicer for him as a child cyclist. But instead he vehemently defends the status quo. And he's a smart kid who'll question so many things. I *cannot* get him to question the built environment. To him it just is what it is, and it is as it should be.
I think there's a deep bias going on. I want to name it. I want it to be studied. What do you think?
People have the same bias about the natural environment. At the beginning of the last century, southern New England was largely denuded of forests. Now second-growth forests have returned, and people assume that that was and always has been the natural state of things. Likewise, the Amazonian rainforest we see today may be a relatively new phenomenon. Five hundred years ago, the Amazon River basin was densely populated with large cities and extensive agriculture. We would have never known this except for aerial Lidar scans, which revealed the contours beneath the foliage.
It's sad. I think it can be explained by evolution: in terms of our biological software, it is quite new for us to change the physical environment significantly. We are more used to mountains being where they are, rivers being where they are, the coast being where it is... at most we move plants, but that's it. So questioning the built environment is the equivalent of questioning the position of hills and mountains. It is an acquired skill; we are not born with it. On the contrary, we are born to think of the built environment as permanent and immovable.
Hmm, people have a bias to be lazy - as little 'new' thought as possible. And then some think deeply about some things. Re: urban planning, have you read Christopher Alexander, in particular "A Pattern Language"?
I have indeed read A Pattern Language; it's a classic within the field. I very much like his stuff about soft edges and microclimates (though he doesn't name them by quite those terms). I have a post about it.
Not sure this is exactly what you had in mind, but in ecology, it's called the shifting baseline syndrome. It's the idea that people are unaware of ecological degradation, because they assume that what they grew up with and around is the normal state of affairs.
Also, since you are interested in the built environment, you might want to check out the work of Warwick Fox, especially his Ethics and the Built Environment.
In the last open thread I was talking about looking for writing jobs as a profession writer. To recap:
1. I have a lot of provable success in a lot of writing fields, to the point where I'm overqualified for most of the "writing" parts of the vast majority of "writing jobs".
2. I don't have an MFA, which is supposed to be an indicator of one's ability to plausibly do what I provably can.
3. I'm up against a lot of MFA/English bachelors holders, which has the predictable effect you'd expect.
4. Sour grapes ensue. You'd be a fool not to notice how sour my grapes are.
Since then, I've been casting a pretty wide net. The jobs I've applied to tend to fall into three broad categories:
1. AI training "jobs". These are contract work and generally involve something like "observe this observable, then describe it to our AI so it can understand it better". The jobs come from enlightened, ultra-moral AI companies who have a lot of rules about not treating employees badly, so they route them through third-party companies who don't have those rules.
The jobs have high hourly rates, but are unreliable in a "You had work one hour ago, but now you don't" way. Reading employee experiences for them is like reading about any internet temp work; people get cut for single mistakes, because of technical bugs, or just because it benefits the company on a minute-by-minute basis. There is typically no review for those thus fired.
2. Job-jobs. Think about the kind of positions you'd see on Indeed, WaaS, or Wellfound. These have benefits and 40-hour-a-week commitments. They also tend to be the most "hybrid" kinds of jobs a writer can get - it's writer/marketer, writer/programmer, writer/contract lawyer, and so on. They usually don't pay great (there's too much competition for them), but as they are not run by principled consequentialist tech-enlightenment people they are much, much less abusive and you can generally depend on them to feed your family.
3. Long shots. These are dream jobs of various kinds. A good example is the Asterisk editor thing Scott posted about in the last open thread; it pays great, it's interesting, and it has benefits. These are incredible, but the competition for this kind of gig is really legitimately high, even at my level of writing. People are quitting other high SES jobs to take these.
Now, to keep things vague, I've applied for roughly 10-15 things. Of those, I'm the absolute most overqualified for the AI training jobs. Ideally, what you want here is someone with enough world experience to read a document, see what it says, and parse it on a few different levels.
You can't get that here. Whoever is reading the applications fully stops once they see there isn't a degree involved; there has been no contact whatsoever from these positions, and they represent the largest part of the applications by a wide margin.
On job-jobs, the interest has been pretty much the same. This is much less devastating because the job field is so barren right now that most of the jobs were things I didn't want much anyway. If I was putting in more effort right now, I'd probably be looking at this category harder trying to find more jobs to apply to, but as of today I've had one conversation with a very disinterested screener, and we determined between us that neither of us really wanted to move forward much.
Long shot jobs are more interesting. Because long-shot jobs tend to be one-offs that the company leadership thinks are important, it's actually much easier to pierce the veil and get into a conversation. I have some video calls scheduled for I think two of these. Those might not go anywhere (and aren't, for the record, with the one named company in this post, who I'd imagine has better-fit options for the job and is probably just reasonably going with those) but the actual ease of getting into a conversation about those jobs is absolutely shocking compared to normal jobs.
I'm the same guy in all cases, but the difference between a burnt-out HR screener and a good-mood CEO trying to find someone interesting to work with is shocking, even at the "hey maybe we should talk" level of conversation.
In the meantime I'm extensively editing the first 30-40k words of a novel to try and pitch it to mainstream literary agents. That's a hyper-low probability bet in the short term, but in a weird way it's actually more likely than, say, getting a bored 23-year-old AI company employee to read a resume.
"profession writer" sounds like a composer of job descriptions.
A friend of mine is Russian and just got her American citizenship earlier this year. Unfortunately her current relationship is a bit rocky, and if it ends, she is considering going back to Russia, where all her family still is. My gut says that I should advise her against this, since it seems like a very bad time to go back to Russia right now, even when that is where one's family is, and even though her friend/social support network here in the States is not the largest.
However, I don't actually feel well enough informed to give that advice. I'd love to hear from anyone with more specific knowledge about where this falls on the spectrum from "Absolutely do not, under any circumstances, no matter what, go back to Russia" to "Sure, they have some issues, but where doesn't? If she has an alternate citizenship/rip cord, going back is fine."
As others have mentioned, it's really not any safer here in the states for her (at least in the long term), especially considering that she's not a citizen. As long as Moscow doesn't get firebombed or something, she should be absolutely fine as long as she has some basic self preservation instincts.
Far from an expert on the situation on the ground in the country, but in her shoes I'd be asking "have I publicly done anything that criticizes the Russian government or war and Ukraine?" pretty hard before I bought a plane ticket.
Also a problem when travelling to the US these days.
That too (sadly).
If she is a she and not a he, and if she's not a doctor (doctors do get drafted), then it's "Sure, they have some issues, but where doesn't? If she has an alternate citizenship/rip cord, going back is fine." Well, unless she publicly posts something anti-Putin or anti-war. The caveat is that Russia is becoming a worse and worse country, with more and more dictatorship. She will have to not oppose the Russian government publicly in any way. As long as she does that, she won't be in any immediate danger. Source: am Russian, living outside of Russia for 3 years.
Ok, I don't know for sure what her social media history is, but knowing her I would be surprised if she had such posts. She's not strongly political in general. So it sounds like it's probably not something where I need to intervene to the point of strong advice.
Thanks.
What's the drafting situation for men? My impression was that men in the nicer areas (as opposed to poor rural parts) aren't getting drafted en masse
The White House has long needed a ballroom, since it was embarrassing holding state dinners and events in pop-up tents on the lawn. That said, Trump's demolition of the East Wing and the size of the new ballroom (it is larger than the White House main building) are widely unpopular.
Was there room for compromise between the two sides?
I'm just puzzled by why everyone cares so much about the renovations. Trump truly is a master of distracting people from the real issues.
You're far from alone in being puzzled. In fact, lately, I see more press about how strange it is to be making a big deal of this, than I see press about it being a big deal.
> it was embarrassing holding state dinners and events in pop-up tents on the lawn.
I found persuasive the argument that it was a sign of dominance to humiliate foreign potentates and dignitaries by subjecting them to conditions so poor, especially making them use porta potties.
The parties should compromise on passing a goddamn budget.
They don't need to compromise on White House renovations.
There are lots of things to hate about Trump. Remodeling just isn't one of them. I view that as getting angry at Obama for wearing a tan suit.
Focus on the real stuff.
Exactly. I'm confused why it was such a big story as well.
> it is larger than the White House main building
I have thought of a compromise.
Your question stirred up a lot of reflexive anger at Trump in me, so I just observed it. Topic triggered 4 ideas, none of which I am at all confident are true:
-Trump had in his NY home a solid gold toilet.
-The bathroom fixtures in the White House residence were replaced when Trump moved in for his first term because Melania did not want to use a toilet that black people had used.
-Nobody does ballroom dancing any more, no matter how fancy the occasion.
-The old wing's a piece of history.
The more I contemplated these 4 non-facts the more irritated I got. I started making up funny gold toilet Trump-dunking replies.
So I see from the comments the wing is pretty new anyhow. It sounds like ballrooms are used for state dinners, not for ballroom dancing. And who knows whether the first 2 items on my list are true (they have a definite rage bait quality).
Actually, I do not give a damn what's done with the East Wing, or whether the Trumps shit into a solid gold whites-only toilet or an ordinary race-neutral porcelain one. I don't even have a clear idea what the East Wing looks like either inside or out, and anyhow it's just not important whether it gets changed and whether the re-do fits my idea of classy.
It's funny how rage works.
Cheers.
>And who knows whether the first 2 items on my list are true (they have a definite rage bait quality).
Your non-confidence is validated: Trump has never had a solid gold toilet, and while Melania did have the restrooms remodeled before moving in, she also had a lot of other things remodeled, and it's pretty typical for incoming presidents to do that sort of thing. There's no evidence it was done out of racism, and an "insider" who gave an interview about it to the Sun said "She was not prepared to use the same bathroom as the Obamas or anyone else for that matter — it wouldn't matter if it was the Queen of England."
https://www.snopes.com/fact-check/donald-trumps-golden-toilet/
https://www.snopes.com/fact-check/melania-trump-obama-white-house-toilet/
A ballroom approved by Congress and decorated in a way that doesn't look like a Cheesecake Factory would probably have been fine.
Does Congress usually approve renovations to executive branch buildings?
Looking at the specific poll wording here:
"The Trump administration has torn down the East Wing of the White House to make way for a 90,000-square-foot ballroom, paid for by $300 million in private donations from U.S. businesses and individuals. Do you support or oppose this project?"
It's hard to frame this neutrally without making it seem like a pointless vanity project. Like, you can try, but it would seem like partisan hedging rather than important context. The average American regardless of political persuasion is generally just against "300 million dollar 90,000 square foot ballrooms" in general for unremarkable reasons, regardless of the important nuances like why we even need ballrooms or fancy state dinners in the first place.
I was surprised because, due to all the online outrage I'm seeing, I had no idea the East Wing was not built until 1902 and then was "significantly expanded in 1942".
So while it may be a historical building, it's not all *that* historical. Also, apparently other offices are there so once it's completed, it won't be just the ballroom:
https://en.wikipedia.org/wiki/East_Wing
"Situated on the east side of the Executive Residence, the building served as office space for the first lady and her staff, including the White House social secretary, correspondence staff, and the White House Graphics and Calligraphy Office, all of which have been relocated until the new East Wing is completed.
The East Wing was connected to the Executive Residence through the East Colonnade, a corridor with windows facing the South Lawn that housed the White House Family Theater and connected to the ground floor of the Executive Residence.
In 1942, President Franklin Roosevelt oversaw an expansion and remodel of the East Wing. This included the construction of the Presidential Emergency Operations Center beneath the building."
I had no idea there even was a White House Graphics and Calligraphy Office. Senseless waste of taxpayer dollars, or worthy patronage of a scriptorium? 😁 And what about the "family theater" so the president and his family can watch movies without having to go to the cinema or watch them on video like the rest of us schlubs?
https://en.wikipedia.org/wiki/White_House_Family_Theater
"In the 1980s, the motion picture industry financed renovation of the facility, which added terraced seating and other amenities. During the presidency of George W. Bush the facility was redecorated in "movie palace red". In addition to its use in screening films, the theater was used by presidents to rehearse speeches."
Yeah, it's Trump. Yeah, it's vulgar. Yeah, the way he went about it was poor. But if it had to be done, then someone should do it, and it may as well happen under him as another. Now I do have to wonder what would be the reaction had this happened under Biden - would we be getting the same outrage over "destruction of priceless historical heritage" and billionaires donating to it? It does seem that different presidents have messed around with building on the east side over the years:
https://en.wikipedia.org/wiki/East_Wing
"President Thomas Jefferson added colonnaded terraces to the east and west sides of the White House, but no actual wings. Under President Andrew Jackson in 1834, running water was piped in from a spring and pumped up into the east terrace in metal tubes. These ran through the walls and protruded into the rooms, controlled by spigots. Initially, the water was for washing items, but soon the first bathing rooms were created, in the ground-level east colonnade. President Martin Van Buren had shower baths installed here.
The East Terrace was removed in 1866. For many years, a greenhouse occupied the east grounds of the White House.
The first small East Wing (and the West Wing) was designed by Charles Follen McKim and built in 1902 during the Theodore Roosevelt renovations, as an entrance for formal and public visitors. This served mainly as an entrance for guests during large social gatherings, when it was necessary to accommodate many cars and carriages. Its primary feature was the long cloak room with spots for coats and hats of the ladies and gentlemen.
The two-story East Wing was designed by White House architect Lorenzo Winslow and added to the White House in 1942 primarily to cover the construction of an underground bunker, the Presidential Emergency Operations Center (PEOC). Around the same time, Theodore Roosevelt's coatroom was integrated into the new building and became the White House Family Theater."
I don't think there's much confidence that Trump will hew to the "republican simplicity" that is the favored American view of the White House, even if different people have tried to interfere with that over the years, e.g. Jackie Kennedy supposedly trying to French-ify the decor.
Plus it does not sit on a large piece of land - you can lose more than you gain by increasing the footprint.
Trump, too, has already “had his turn” - redecorating in a manner more pronounced than most of his predecessors.
You aren’t supposed to make a great many changes … there’s an advisory committee which he has ignored.
I know that to Europeans anything the age of the US isn't *that* historical, but the White House burned down in the War of 1812, which puts a 1902 expansion in the earlier half of its lifespan.
As someone on the Western side of the country where even less is that age, I'm not so attached to historicity that's just "happy side effect of WW2". I concur that it probably should be done, that it'd get pushback either way, but also that no attempt was made to mitigate the obvious resistance.
Maybe, but this seems like one of those places where the options were to do it once imperfectly or to have a decades-long CAHSR-style fiasco. It's one case where Trump's "just do stuff" approach seems good.
(Although I still think $300 million is a pretty high cost for a ballroom)
Tbh the East and West Wings of the White House have very little history or tradition behind them. They were only built in the 1900s, and they each get extensive renovations with each President. The Oval Office itself has only been around since FDR, and the Resolute Desk has only been in the Oval Office since the 1970s, and even then it was taken out during George HW Bush’s presidency. Not really a situation where you're looking for a compromise.
No, there was never any possibility for compromise.
The Democrats (online) are screaming about permitting violations and asbestos violations and probably NRHP violations, in a way that makes it clear to me they would rather subject Trump to 4 years of bureaucracy than let him build anything, regardless of whether it is "reasonable". Trump is constantly a "you can't tell me what to do" personality, and benefits politically from doing it in a way the Democrats dislike.
No. The Democrats are complaining because
1. The costs are now estimated to be $3000/square foot in a city where high-end building costs are $700/square foot. And the cost overruns look like they'll go into pockets of shady Trump cronies.
2. There are no finalized plans to even criticize yet. But from the drawings, several architects question whether it can be built the way it's been depicted. So that means more cost overruns and more graft and corruption.
3. It's an eyesore. And it destroys the lines of the White House. Esthetically it's worse than paving over the Rose Garden.
>1. The costs are now estimated to be $3000/square foot in a city where high-end building costs are $700/square foot. And the cost overruns look like they'll go into pockets of shady Trump cronies.
But it's being funded out of private donations, not tax money. So what's the problem?
That's actually worse. There won't be the transparency one gets with government budgeted spending, and the surplus funds will be funnelled off into the pockets of Trump cronies and Trump front companies.
How is that worse? Why should we care if private donations get spent inefficiently?
I think you're confusing the specific things they're complaining about with the reasons they're complaining.
Let's face it, there's a world in which Obama did this. (I have trouble imagining a world where Biden did this, for some reason, but can easily imagine the Obama ballroom.) And in that world, the Democrats think it's a fine idea and the Republicans have a laundry list of objections to it.
All of this makes it a very boring issue, not worth thinking about.
In an ASI post scarcity utopian world, would we still control our destiny as a species or will it be decided for us? And if so, won't stripping away our agency take away a big part of our sense of meaning and make us miserable regardless of the abundance?
I've written about this in more detail
https://open.substack.com/pub/thedistantpresent/p/the-great-detachment?utm_source=share&utm_medium=android&r=5a1s78
Have you tried to explain the economic system? Because we can’t all get what we want. The Culture had infinite space. We have a finite earth.
We don't control our destiny as a species anyhow. We are squeezed, shoved, and tossed into the air by other entities while trying to meet needs we did not choose to have. We're mostly just making it up as we go along.
"In an ASI post scarcity utopian world, would we still control our destiny as a species or will it be decided for us?"
Will we be The Culture, with everything run by the Minds? Some people think this is fantastic world and can't wait for it to happen.
But how would we spend our time? Perhaps the educational system could foster creativity in all of us, rather than turning us into economic cogs? I suspect that most people would be terminally bored because they lack the inner resources to occupy their time. Life would be a vast daytime TV existence.
Especially with no stakes to anything. I anticipate a lot of gambling/games of chance to feel a semblance of risk and reward that even ASI can't optimize
Underrated option IMO. In the same vein, birthing a benevolent machine god also gets you this outcome.
Knowing in advance that you’ve actually built a benign/aligned superintelligence is of course the hard part. I think the assumption is just that in the branches under “loss of control”, the percentage of worlds where the ASI turns out to be benign round to zero.
"Then it starts to get uncomfortable, the old man prefers cereal for breakfast, but oats are what he finds on the kitchen counter with the note “This is healthier for you”. He wakes up to the furniture rearranged because it would make the house more spacious. He wanders over to the garage and looks at a half empty work table. “I got rid of some of your tools, they were too dangerous for you at your age”. He spots an emptiness at the corner of his bedroom that draws him like a vacuum. He can’t shake the feeling that something’s missing. Ambling over, a note “I got rid of your golf clubs, you’re too old to play anymore”. A slow suffocating loss of agency, one intrusion at a time until there’s no defensive boundary between what he wants and what is decided to be best for him. "
I also think something like this will happen, and will be turbocharged by digital mind clones. Think of social media algorithms, which know your preferences and personality traits, as the ancestors of the clones, which will approximate your thinking and behavior almost perfectly. An AGI charged with watching over you to ensure your wellbeing would run experiments on your digital mind clone to find optimal outcomes for you in the real world. It might discover things about you that you're in deep denial about or are even completely ignorant of, and it would make sudden and initially unwelcome changes to your life in the pursuit of that optimality.
Imagine the old man's robot butler holding him down at breakfast and force-feeding him smelly Green Eggs and Ham while saying "Trust me, you're gonna love this."
True, how will we feel about this though? If there was a hyperintelligent earpiece attached to you that always told you the best decision to make for every moment of your life, would you ever contradict it out of curiosity or to assert your independence? Would it anticipate this and recommend decisions that give you a false sense of independence? Would you notice this and hate it even more? It all gets really messy if you think about it.
Scott wrote a story about that: https://croissanthology.com/earring
Agreed. If you wanted true independence, you'd have to agree to be upgraded into a superior, non-human mind that your AGI custodian couldn't model. Maybe that will be the backdoor strategy to getting rid of the human race and duping us into becoming productive again.
Why assume ASI are ignorant of the social and emotional aspects of human nature? Why assume they're not going to include in their calculation/simulation human resistance to change and preference for familiarity? Indeed, why assume an ASI is incapable of accounting for the human need to feel important and in control of your own destiny?
I agree with you. The question that remains: does anything change, then? Why bother going for ASI if it somehow doesn't actually change anything? It might move the baseline up, and human nature adjusts within a year and will be just as happy/unhappy as before, no?
"ChatASI, I need some meaning in my life. What can I do?"
"I noticed this trend through your behavior patterns, but I've been waiting for you to bring it up so I know you're ready for a change. I have three options, but they all require some significant life changes that I'll guide you through over the next six months. I know you can handle them despite your 80 IQ. First, there are some archeological digs in South America that for some peculiar reasons need human assistance. Second, there's a colony ship that is departing within the next two years. You have genetic markers they need in their pool of applicants, but it will be an all-awake journey, so you'll need to do some training before you can volu..."
Seems preferable to, "I can't get a job, so I'll rot in my parents' basement playing video games and hope I win the lottery."
Progress and accountability update on my upcoming freemium web app, BetterQualities. The app is designed to help users let go of unskillful mental qualities like worry and procrastination as they arise, and cultivate skillful ones like happiness and agency instead.
After a few snags, the pre-launch landing page at www.betterqualities.com is up and running. If you’d like a heads-up when the app goes live, you can join the waitlist there. I’d also be really grateful if you filled out the short survey about which skillful and unskillful qualities you’d like the app to cover. Thanks to everyone who’s already done so!
As for the app itself, frontend data management is still a bit ad hoc; I need to implement a proper, principled solution. Once that’s done — along with some UI polish and payment integration — we should be good to go.
I also started a Substack blog, A Metta Analysis, in which I'll explore the app's theme (skillful and unskillful mental qualities) in more depth. The content is still sparse, but feel free to subscribe if the topic interests you.
I won’t be posting updates in the next two open threads, as I’ll be on holiday (my Europoor mentality in action :D).
Error message when I attempted to join your wait list - "Failed to submit email. Please try again."
Ah, thanks for the heads up – this is a weird bug I wasn't aware of. Could you try again with the subdomain "www" included (so https://www.betterqualities.com/)? Sorry for the hassle.
Edit: It was a CORS issue, fixed.
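For anyone else who hits the same thing: the bare domain and the "www" subdomain count as different origins in a browser, so a form served from one that posts to an API on the other gets blocked unless the server sends the right CORS headers. A minimal sketch of the origin rule (the helper function and URLs are purely illustrative, not the app's actual code):

```python
from urllib.parse import urlsplit

def same_origin(a: str, b: str) -> bool:
    """Two URLs share an origin only if scheme, host, and port all match."""
    pa, pb = urlsplit(a), urlsplit(b)
    return (pa.scheme, pa.hostname, pa.port) == (pb.scheme, pb.hostname, pb.port)

# The apex domain and the "www" subdomain are distinct origins, which is why
# a request from one to an API hosted on the other needs CORS headers.
print(same_origin("https://betterqualities.com", "https://www.betterqualities.com"))   # False
print(same_origin("https://www.betterqualities.com/", "https://www.betterqualities.com/signup"))  # True
```

The usual fix is either to have the server send an `Access-Control-Allow-Origin` header covering both hosts, or to redirect one host to the other so every visitor ends up on a single origin.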
I'm trying not to be mean, but it sounds like the classic trap for procrastinators: procrastinating by "doing something" that's actually procrastinating (make a list, make a plan, make extensive preparations, read more instructions, log stuff on an app, do everything except the thing to do). Been there.
This should be included in your pitch deck. That's a large market of repeat users.
Thanks, that’s a good point. In my mind, the app is meant to be a ladder that, once climbed, can be discarded. But there’s definitely a risk that some users end up lingering on the ladder — playing around instead of climbing it — and getting even less useful work done.
Any good blogs / books / whatever on underpopulation / lowering fertility and their causes?
At some point I tried to read up on this topic, because at least at the popular nonspecialist level, there are lots of wildly diverging strong opinions. It seems to be the kind of question that motivates people with strong ideologies.
One blogger I found is Alice Evans, writing "The Great Gender Divergence" (https://www.ggd.world/). She seems to be a real specialist in the field, with a strong interest in what is going on in the world at large, and not just in the usual Western countries.
I wrote a summary of the research on how geomagnetic storms could impact the electrical grid: https://existentialcrunch.substack.com/p/space-weather-and-critical-infrastructure
tldr: Not as bad as I thought when I started writing, but still catastrophic.
Thanks for doing this. This is something I've been wondering about for years.
I guess I see now what the Reddit is telling me to get back to work. I free think and free write and my writings are indefeenriabke from AI(proven) to the point where I need to leave spelling msktakes.
So I’m talking to a person on the Reddit and guess what, his writings are 67% AI on the school system so he needs to “redo” his own work because the professor or PH.D overseeer is not smart enough to understand creative brilliance.
This is something I don’t understand. 1) Why are idiots getting the PH.D and using their overseer power to silence the simple minded brilliance of the worlds AuDHD, Autism and ADHD populations. As the unofficial official despised loved spokesperson for the AuDHD I need to bring this to the attention of the complex genius and brilliant minds of the world to figure out. Gifted accepted - I’m sure I will double check your work to make sure it’s original and not the AI garbage that is everywhere today. Same with the over see’er PH.D people.
I guess to end I came up with the first half of the theory of everything formula from my own AuDHD peabrain using a simple method and writing sample. I then got the boot from the SOL community(very understandable, my fault. Maybe I try to make it back for boy’s night on Monday…. Hmmm…..)
Where was I… oh yes… the first half of the formula for the theory of everything is 1/2 ADHD brain plus 1/2 Autism brain = AuDHD brain plus music(as a bridge medium)
The bridge medium allows an AuDHD mind to join the ADHD and Autism side to now be able to hyperfocus and increase or decrease the trance state by using the volume control as adjusting the volume.
The trance music is for AuDHD individual to decide. I use the radio and Shazam for a 15-20 song playlist for songs that just hit me.
I am in a small trance now and this is my first writing sample in an effort to show what free writing and free thinking looks like from my AuDHD bridged mine who just writes and posts without worry and does not register consequences of his writing sample postings.
Research study found in the r/ gifted as well as cnn. Point is Jillian Hynes and Sheila Wagner from Aramark. Jillian seen undiagnosed autism traits in a remote camp and triggered me on purpose. This is all documented and provable because soon I will be making this entire situation public. Possibly though the Neurospicy community but 100% going public with everything. I can prove ANYTHING I say… and I won’t be silenced anymore.
1st post of many. If people of the deepstacks don’t like what I say, then take it up with the OWNERS of the deepstacks.
This is ALL
Have a good day :)
I hope you like my budding creating writing styles :) It’s comparable to Nikola Tesla, or so I am told. Just wait until my mind rests from the cubic posting yesterday on Reddit. There will be ALOT of deep thoughts coming out… exactly like this.
You are invited to describe a movie, novel, RPG adventure, or other narrative work that could be made based on the prompt, "Hammertime."
Movie: The uplifting tale of a woman who wastes her youth getting hammered and getting nailed whilst feeling detached, before traveling at last down the sparkly rainbow psychedelic road to find her true gender: Uglyshark. Pronouns: Ugsh, Ug*sh’s (central asterisk is to honor the buried s’s right to be punctuation-adjacent like its twin)
People say John Henry died of exhaustion in a tunnel racing a steam-powered drill.
Those people are wrong.
Henry was making great progress in that tunnel when it collapsed, trapping him inside with his shaker, Dan. Realizing they had limited air, Henry worked with Dan to dig their way out. When their lantern finally gave out, Henry kept hammering in the pitch black, finding he could still tell where Dan's spike was from the sound, and that he wasn't tiring out for some reason.
They worked through the night and into the next morning, miraculously breaking out of the other side of the mountain. But when the people they met included a tough frontierswoman, a farmer with the power to control water, a wind-riding cowboy, a giant firefighter, and an even larger lumberjack, Henry wondered if he was still in America.
A time machine steered by Stanley Kirk Burrell.
Johan Larson,
the hammer skipped off the steel nail,
And crushed my fingernail.
My workmates are trying hard not to laugh
and I am trying hard not to cry.
I once had an idea for a game or movie about a guy with the superpower of transforming into a giant hammer. It sounds like a ridiculous power so everyone mocks him, but it turns out to be surprisingly useful (e.g. you can instantly get out of handcuffs by transforming to a hammer and back)
"Every time he swung his mighty hammer it was as if he aged years in an instant..."
Action movie with heavy use of slow-motion where the hero flings hammers at people.
Andrew Sachs making a hammer sandwich.
Well... it's got to be a Mike Hammer movie, and I imagine he's going after MC Hammer. Whether it's a generic Mike Hammer story, or a Highlander-style "there can only be one" is negotiable.
Maybe mix in the Hammer Horror monsters.
Not sure if this is the motivation, but this seems like a good way to measure the "creativity gap" between humans and LLMs. Given an open-ended prompt, how much diversity in responses do you get in each case?
Everyone is welcome to participate, from caged silicon gods to squishy meatbags to semi-structured ripples in the ether.
> to semi-structured ripples in the ether.
That reminds me of an idea I had for an alien civilization of beings that consist of specially patterned energy ripples within the jet of a black hole, that can only exist due to the extreme physics of that environment. They discovered quantum gravity before inventing the wheel, but then sadly died out when their black hole became less active. If you listen to the right frequencies in the right spot, you can still hear their desperate cries for help broadcast across the cosmos.
The Wizards' War is over, and an exhausted world crawls out of the wreckage to rebuild. But some things are forever changed by the titanic energies released in the struggles. Storms that might have yielded hail now drop sheets of sledge hammers. No structures are truly safe from the hammer storms, and the finest scholars strive to predict where and when they will strike. Meanwhile much of the population in stricken regions has taken to living semi-nomadically in yurts so they can flee when hammertime comes.
This almost sounds like Adventure Time
A software developer who makes software for the porn industry discovers an algorithm that makes everything more efficient. Corporations fight to control it, but the internet and AI explode to use it everywhere.
"When all you have is a hammer, every problem begins to look like a nail."
Easy mode: make a movie of "Lucifer's Hammer." Slightly harder mode: remake of "Armageddon" with Bruce Willis' character played by MC Hammer.
Harder mode: the comet/asteroid is made of tachyons (keep yer traps shut, physicists -- this is Hollywood) and must be diverted so as to avoid a total restart of time and going back through all of history.
It broke my heart when the mean ole no-fun killjoy physicists turned around and got rid of tachyons as anything more than purely hypothetical and not at all likely to exist.
Why can't we just have nice things?
An overly aggressive carpenter decides that clocks are too thick. They should be thinner, flatter. As flat as he can get them really.
1) Is it true that nowadays a person's value is more strongly derived from their intelligence than a hundred years ago, when things like kindness, behavior, and manners played a bigger role?
By value I mean whether other people would regard this person as a person to look up to, not just the economic value.
2) If true, is 1) a good thing in utilitarian terms? (If not true, would it be a good thing?)
On #1, I don't believe so. Intelligence strikes me more as a tool that can be used to do things that build status, rather than a grantor of esteem in and of itself. Take, for example, the case of the gifted student. People may view a gifted student as having great opportunity, or as a person to cultivate, but they don't just look at a kid, see how smart they are, and suddenly start looking up to them. The hierarchy of learned professor to gifted student still places the student below and the professor above, even if the student is naturally "smarter."
Likewise, if I hire a college grad who's smarter than me into my department, I'm thrilled at the opportunity to train them up, but I'm not just jumping straight into deferring to and being led by them - smart is great and all, but at that stage they have no experience. Raw intelligence is a tool, but having a really cool hammer or powerdrill isn't something that will lead to you being admired unless you use it to do something admirable.
Note that one hundred years ago, a push for modernity was in full swing. Model Ts were replacing horses, alongside modern roads, electrification, modern medicine, etc. Intelligence was exceptionally celebrated and rewarded then, with capital-"I" Intellectuals and experts having tremendous influence. The Nobel Prize was first given in 1901, and Mensa International was established in the 1940s. Imagine carpenters on a job site going from manual hand saws to centralized belt-powered saws to distributed electrified tools, all in a lifetime. One hundred years ago, applied intelligence may have been more valuable than today, as society was in the middle of a very visible and tangible physical transformation. Today it is in a digital one.
Expertise is still respected, but also viewed with suspicion. So maybe intelligence is valued less today, or maybe more. I don't know; it also depends on local needs and the ability to apply that intelligence to, well, something. Are the undercurrents of Luddism weaker or stronger today...
Maybe ask Reddit's AskHistorians? It might be easier to ask about the valuation of experts, professors, and traits. Also, note there were tremendous local variations in valuation, as the world was far more diverse in language, culture, and belief systems before today's modernity.
I'm not sure if it's true, but if true, I don't think it would be a good thing. Value systems shape people's behavior, not just their feelings, but there's much less you can do to increase your intelligence than you can to increase how kindly you treat others. And you can influence how altruistically people apply their intelligence even if you can't influence how intelligent they turn out to be.
I remember years ago at a Less Wrong/SSC meetup, I talked to someone there about the book "Flash Boys," on the business of high-frequency trading. The author was of the opinion that it was bad, and a misuse of human resources, because it generated a lot of revenue without plausibly generating value for society. I thought he made a strong case, but was interested to hear the "pro" side.
The person I spoke to was not a fan of the book (without having read it, it turned out.) He had worked in HFT, and he objected to the author taking aim at it. Not because he objected to the characterization that it only concentrated wealth and didn't provide value to society, he acknowledged that was true. But because, he said, smart people ought to be able to apply their intelligence in our society to become rich. Apparently, he didn't think it ought to be necessary that they do so by creating value for anyone else.
I don't think our social values can make people much more or less intelligent. But they can promote or suppress attitudes like that.
> On the business of high-frequency trading.
I know two people who went into that business after finishing their PhD. The contribution to society is not literally zero, at least not for the company they worked for. They are providing instant offers, which is very important for the stock market to work properly and to form prices even when there is little activity from standard traders. In fact, part of their revenue (not the lion's share) came from the stock exchange paying them directly for their service.
But I agree with the argument qualitatively: you wouldn't need highly intelligent people to provide those services, and the amount of money that the HFT companies earn exceeds the value to society by very much.
Seems a bit like professional chess playing; you might spin off a few interesting games that people can study, but basically you are pitting your wits against the other guy for a sum not much greater than zero.
But intelligence is neutral in its way, and some intelligent actions would have a value less than zero, making chess playing look a bit better after all.
Raw intelligence, probably, but only in as far as it's revealed through some sort of significant actions or accomplishment. The potential of intelligence today is a lot higher, there's more fields to apply it to and succeed and the trope of the socially off-putting genius has been around for a while so people are willing to overlook manners. I don't know if it's a good thing though.
Do you perhaps mean the economic value…?
No, I mean whether other people would look up to that person. I have edited my post to make this clearer.
I wrote a novel about the prospect of China building the first AGI (following my CAST alignment agenda). It came out today!
https://raelifin.substack.com/p/red-heart-is-now-available
https://www.alignmentforum.org/posts/NQK8KHSrZRF5erTba/0-cast-corrigibility-as-singular-target-1
I'm glad I saw this. I'm definitely going to read it.
For anyone who doesn't know, this is the same author who wrote the Crystal Society series, which is one of my favorites. Eliezer Yudkowsky said of the series: "[Crystal Society] seems to belong in the very, very tiny subset of AI stories that are not bloody stupid, a heroic and almost unbelievable accomplishment."
Do we need clocks at all? My proposal for abolition:
Anything requiring an appointment can be automatically allocated with a countdown timer and frequent reminders.
Office hours replaced with productivity targets.
Anyone doing work requiring daylight optimization can begin work whenever they want without having to go "oh no it's 5am" because 5am doesn't exist any more.
They pretty much have gotten rid of clocks. I don't like to wear a watch and frankly it annoys me having to ask the time sometimes from phone slaves.
If you get rid of clocks, you get rid of the ability to passive-aggressively stare at the clock in order to make the other person understand they should get the hell to the point. I for one will not stand for it.
Great point. Also no more double takes.
"We meet every Tuesday at 7" would be awkward to convert to "We meet in 34 Hours, and then again every 168 hours after that"
Why stop at clocks? If computers can do everything we need, then we don't need to do anything ourselves, so why keep around any of the artifacts that helped us back when we did? That's not just physical objects or logistical tools: it's a whole wealth of concepts, models, and vocabulary that we just don't need anymore.
"Let's meet in 22 hours" is just objectively less clear than "Let's meet at 9 AM tomorrow." And it only gets worse the farther out you go - if I propose going to an event in 2 months, 15 days and 14 hours, there is no way that you'll know what date and time I'm referring to in your head. (And if we're chatting for an hour before you accept the invitation, I have to update my proposed countdown time before you put it on your calendar!)
And it gets still worse if you have to worry about scheduling conflicts. Yesterday, someone asks me if they can meet in 3 days and 12 hours. Today, someone asks me if they can meet in 2 days and 10 hours. Can I accept that invitation? They sound pretty different, but depending on what hour exactly the second person invited me, they might be referring to the same time. Meanwhile, two people asking me to meet at "Noon on Nov 5" is obviously referring to the same time.
Lastly, countdown timers would still have to be *implemented* using a calendar and clock. If a timer gets stopped for any reason (power outage on your computer, etc.), you need to recalculate the time remaining when you start up again. The only way to do this is to compare the current time to the appointment time, using a clock.
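To make that last point concrete, here's a minimal sketch (class and method names are my own illustration) of why even a "countdown-only" scheduler ends up storing an absolute timestamp and re-reading a clock to recover the remaining time after an interruption:

```python
from datetime import datetime, timedelta

class Countdown:
    """A countdown that survives restarts by storing an absolute deadline."""

    def __init__(self, hours_from_now: float):
        # The only robust representation is an absolute point in time,
        # which requires reading a clock at creation.
        self.deadline = datetime.now() + timedelta(hours=hours_from_now)

    def remaining(self) -> timedelta:
        # After a reboot or power outage, the remaining time must be
        # recomputed by comparing the deadline against the clock again.
        return self.deadline - datetime.now()

timer = Countdown(hours_from_now=22)
print(max(timer.remaining(), timedelta(0)))
```

Any "clock-free" timer that can survive a restart is secretly doing exactly this.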
What problem does this solve? Just time zone annoyance?
I have considered similar solutions, but there IS a useful property of times and clocks. If I tell you it will be 826 hours until your next appointment, in your head, do you know that's going to land during the middle of your sleep cycle? Or your dinner time?
Times tell you, in advance, what you expect to be doing around that time. I switched my clock to 24 hours and caught myself just converting even when there was no external reason to, in order to understand what time it was. The reason is that 1700 is connected to less concepts in my brain than 5pm is.
I think clocks are too useful for appointments and co-ordinating meetups etc., although I wholeheartedly endorse the idea of replacing office hours with productivity targets. There's nothing more annoying than having to bum around the office doing nothing because you've done all the work you can but haven't completed the arbitrary number of hours your boss requires you to be present for.
Presumably computers would need some sort of internal clock, but we could essentially outsource our diaries to them, with frequent enough reminders that we never have to look at the time again.
This was supposed to be a reply to Peter Defeel
_Convince Bruto the world is ruled by lizards!_
(Hi!)
I've been making an experimental browser game on the topic of conspiracy beliefs and how they arise - curious to hear what this community thinks. r/slatestarcodex seemed to like it a month ago; this is an update with better UI & graphics.
https://fiftysevendegreesofrad.github.io/bruto/?sss
The underlying model is a belief network, though for the purpose of gameplay not strictly Bayesian.
Full disclosure: Although I’m only testing a game, I’m doing so today as an academic researcher so have to tell you that I may write a summary of responses, and record clicks on the game, as anyone else testing their game would. I won’t record usernames or quote anyone directly. If you're not ok with that, please say so, otherwise replying here necessarily implies you consent. Full details linked from the title screen.
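For anyone curious what a non-strictly-Bayesian belief network might look like under the hood, here is a toy sketch of the general idea (my own illustration, not the game's actual model): beliefs are nodes holding a confidence level, and influencing one node propagates weighted support or suppression along its edges.

```python
# Toy belief network: nodes hold a confidence in [0, 1]; edges carry a
# signed weight (positive = supports the target, negative = undermines it).
beliefs = {"life_is_hopeless": 0.9, "some_hope": 0.1, "chemtrails": 0.8}
edges = {
    ("some_hope", "life_is_hopeless"): -0.6,
    ("life_is_hopeless", "chemtrails"): 0.5,
}

def influence(target: str, push: float) -> None:
    """Nudge one belief, then propagate one step along its outgoing edges."""
    beliefs[target] = min(1.0, max(0.0, beliefs[target] + push))
    for (src, dst), weight in edges.items():
        if src == target:
            # Strongly held beliefs (far from 0.5) push their neighbours harder.
            delta = weight * (beliefs[src] - 0.5)
            beliefs[dst] = min(1.0, max(0.0, beliefs[dst] + delta))

influence("some_hope", 0.7)  # raising hope suppresses "life is hopeless"
```

A proper Bayesian network would instead maintain joint probabilities and update by conditioning; the heuristic update above is cheaper and easier to tune for gameplay.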
This was fun, but sadly on mobile I had this thing where zooming in and out would permanently shrink the mind map until the mind map became unplayably small :( so wasn’t able to finish
Best of luck with the research!
This is pretty fun
<mild spoiler>
I'm not sure if it was intentional, but it's pretty funny/tragic that the first domino piece that brings everything down is influencing "my life is hopeless" -> "there is some hope, things are not that bad"
I'm interested in what research you're doing, and what conclusions you're intending to draw from watching people's playthroughs.
I should tell you that I got pissed off with the UI and ended up just going around the circle clicking every time "Influence" became available. I don't know what deep insights that's supposed to give you, unless the subject of your research is player responses to game design, of course.
I would have liked to be able to move the belief icons around on the screen, to lay out the map of what was going on for myself, then I could have made better sense of what your mechanics were supposed to be. I would rather have been able to do everything from the graph view, and play with the belief model itself a little more directly.
As it was I was reading the text every time to grab seemingly nonsensical connections (like the belief "I am useless" being the thing that stops a man believing in chemtrails.)
I did the easy mode, which was a straightforward propagation through the network. I thought hard mode was going to involve bringing some beliefs up and down again, in order to trap certain priors and end up with contradictory beliefs where you need them.
Unfortunately I seemingly softlocked myself, and couldn't be bothered to keep bringing up those laggy windows for each belief to find where to go next.
The lag is not your game's fault, my computer is shit because Windows is artificially degrading itself in an attempt to force me to upgrade to Windows 11 where they can better track everything I do and send that information to the Jews, who are also using ChatGPT to mine Bitcoin in every open tab while feeding me seemingly-helpful but ultimately timewasting answers (having rendered search engines and StackOverflow useless to force me to use AI in the first place) all as part of an effort to destroy my productive output and render the West torpid and inert.
I'm an AI engineer and I'd like to switch my career track to AI safety, probably mechanistic interpretability (because I find that hot). I genuinely like the field and I've been keeping up to date, but don't have impressive mech-interp projects to show off yet. Any advice? (Any intros?..)
(I do intend to talk to 80000 hours and probably apply for an OpenPhil career development and transition funding if there's going to be lengthy portfolio-building involved.)
You may find this guide from Neel Nanda helpful: https://www.alignmentforum.org/posts/jP9KDyMkchuv6tHwm/how-to-become-a-mechanistic-interpretability-researcher
I'll be going to Buenos Aires before this month ends. Any recommendations of places to visit / things to do?
Anyone want to steel man the argument for a clock change? I think it is useful here in the U.K. but that’s a minority opinion. However the people who oppose the clock change often don’t get the consequences.
Responses here are pretty sane. I think I was negative 50 karma on a British subreddit for defending the clock changes. People do not like it one bit.
It might be because the change back to winter time accelerates the seasons. Today sunset was 16:38 here in the Cotswolds. It won’t be bright past 5pm for 3 months. 2 months ago, just the end of summer, sunset was 8pm, which was the case (or later) for the previous 5 months. Past 9pm for high summer.
The case for DST is easily made. In the northern hemisphere summer, where daylight runs to 16 hours or more, without a clock change the sunset would be at 8pm and the sunrise at 4am, a waking time which is useless to most. Changing the clock gives us glorious late sunsets, even later in the UK, as we have closer to 16.5 hours of daylight.
The case for not having DST in the northern hemisphere winter is easily made. There are only ~8 hours of sunlight so keeping DST pushes the sunrise to 9am* making the morning commute dark until you get into the office. You gain an hour in the evening which is also useless to most as you will be at work. Simply put, you can’t waste morning light in winter. That’s been tried and reversed in many places.
So a clock change is the solution. The EU keeps threatening to get rid of the clock change (and the UK would likely follow) without much guidance as to the consequences, or as to whether countries should stick to DST or not, which speaks to a lack of leadership and probably of understanding. Good luck challenging the Chinese if you can't do this.
* 9:15am here - we have slightly less than 8 hours daylight. Close to 10am in western Ireland.
If you tend to wake up around sunrise (or you have kids who do, or you get woken by birds) then it's pretty annoying to have the sun rise before 6... you're stuck sitting around for too many hours before you can start your day.
But the sun can set at 9pm or whatever if it likes, that's fine with me
It acclimatizes us to jet lag and time zone changes, which is an important skill to have in the modern world. In fact, I think we should be more radical and move our clocks forward and backward by three hours! Or maybe do randomized clock shifts?
A data-point on steel-manning it - the US already tried going on permanent daylight time in 1973 as a 2-year trial, and it turned out to be very unpopular and didn't even last the full 2 years before being ended: https://en.wikipedia.org/wiki/Emergency_Daylight_Saving_Time_Energy_Conservation_Act
Perhaps the only issue was that they chose to go on permanent daylight time rather than permanent standard time, but that permanent DST *does* seem to be the main proposal again: https://en.wikipedia.org/wiki/Sunshine_Protection_Act
---
And, it's less of an issue for UK which is relatively geographically small, but in the US part of the problem is that what makes sense for Florida in the south (a warmer, sunnier part of the country) may not make sense for, say, Minnesota in the north.
You can let each state decide, but IMO the worst possible outcome here is that *some* states keep the clock change and others go to permanent daylight time. I used to live in Chicago and went to school in Indiana; Indiana didn't practice DST, and it was really annoying to always have to think about whether DST was in effect to figure out whether Indiana was an hour ahead or not.
We can already have this situation with states not practicing DST (currently only Arizona) but opening the door to permanent daylight seems like it'd make it worse.
I don't think there's a universal best choice between DST and standard time. For people who live in northerly latitudes, DST is good in the summer, since it makes for long pleasant summer evenings. On the other hand, when I lived in Arizona, I liked the fact that the state didn't go on DST: it allowed people to do outdoor things comfortably in the cool hours of the morning before work or classes began; and it brought an earlier end to the hot day, so that there was an additional cooler hour in the evening before time to go to bed.
I definitely think it’s a northern hemisphere thing. South of 40 latitude probably no need.
Frankly I'm in favor of having a clock change, but reversing it so that DST takes place during the winter instead of the summer. This way the sunset stays at a more consistent time throughout the year, and in my opinion, at least in the modern day, sunset is more important than noon as a time to base things around.
That seems to be the worst of both worlds.
Nate silver wrote this last year. I'm not convinced but he makes some good points
https://share.google/XgVNciaSeSxElfY0U
Yes, my thoughts agree with that.
I'm not going to actually steelman the argument, but I will say what I think is the obvious solution to DST: all clocks are reset so that 12 noon aligns with solar noon, based on vertical bands 20 miles wide at the equator. The clock resets automatically each day at 12:01am and everything just flows smoothly.
Pretty neat, but time zones would fractionally shift daily, no?
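Roughly, yes: true solar noon drifts by the equation of time, so the daily reset would wander by a few minutes over the year. The band offsets themselves are just a function of longitude, something like this sketch (which ignores the equation of time; the band width is taken from the proposal above, and the function names are my own):

```python
# Earth's equatorial circumference is ~24,901 miles, so a 20-mile band
# spans 20/24901 of 360 degrees of longitude, about 0.29 degrees.
MILES_PER_DEGREE = 24901 / 360  # ~69.2 miles per degree at the equator
BAND_WIDTH_DEG = 20 / MILES_PER_DEGREE

def band_index(longitude_deg: float) -> int:
    """Which 20-mile-wide vertical band a longitude falls into."""
    return int(longitude_deg // BAND_WIDTH_DEG)

def solar_offset_hours(longitude_deg: float) -> float:
    """Offset from UTC that puts mean solar noon at 12:00 for this band.

    The Earth rotates 15 degrees per hour, so each degree of longitude
    shifts mean solar noon by 4 minutes.
    """
    band_center = (band_index(longitude_deg) + 0.5) * BAND_WIDTH_DEG
    return band_center / 15.0

# Greenwich sits near longitude 0, so its band offset is close to zero.
print(round(solar_offset_hours(0.1), 3))
```

Note the bands imply over a thousand distinct offsets, each about 70 seconds apart, which is the scheduling-coordination cost the proposal trades for solar alignment.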
1. Assume darkness enriches the soul.
2. Let's make sure we maximise the hours we spend sleeping through the morning light so we are awake for as much of the evening darkness as possible.
Obviously completely irrelevant for farmers since they respond to animals and you can't change animal clocks. But if we want to keep children and office workers in darkness then the current clock change is essential. It helps keep the margins of winter darker. Going the opposite direction would have the terrible consequence of making summer evenings lighter.
As a compromise we could just shift the clocks by half an hour and be done with it forever?
The world is already uniform and stale enough. The clock change makes the world more dynamic.
Presumably the official reason is steelman enough?
It is. It’s been forgotten though.